Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Monday, September 15, 2014

Richard Jones: No Uploads For You!

Anti-nanocornucopian Richard Jones offers up a fine technical debunking of techno-immortalizing "uploads" in his latest Soft Machines post, Your Mind Will Not Be Uploaded:
I start by asking whether or when it will be possible to map out... all the connections between [the brain's] 100 billion or so neurons. We’ll probably be able to achieve this mapping in the coming decades, but only for a dead and sectioned brain; the challenges for mapping out a living brain at sub-micron scales look very hard. Then we’ll ask some fundamental questions about what it means to simulate a brain. Simulating brains at the levels of neurons and synapses requires the input of phenomenological equations, whose parameters vary across the components of the brain and change with time, and are inaccessible to in-vivo experiment. Unlike artificial computers, there is no clean digital abstraction layer in the brain; given the biological history of nervous systems as evolved, rather than designed, systems, there’s no reason to expect one. The fundamental unit of biological information processing is the molecule, rather than any higher level structure like a neuron or a synapse; molecular level information processing evolved very early in the history of life. Living organisms sense their environment, they react to what they are sensing by changing the way they behave, and if they are able to, by changing the environment too. This kind of information processing, unsurprisingly, remains central to all organisms, humans included, and this means that a true simulation of the brain would need to be carried out at the molecular scale, rather than the cellular scale. The scale of the necessary simulation is out of reach of any currently foreseeable advance in computing power.
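To make concrete what such a cellular-scale "phenomenological equation" looks like, here is a minimal illustrative sketch (my own toy example, not anything from Jones' post) of a leaky integrate-and-fire neuron; the membrane time constant, resistance, and threshold below are arbitrary placeholder values, and Jones' point is precisely that the analogous parameters in a real brain differ from cell to cell, change over time, and cannot be read out by any in-vivo experiment we know how to do.

# Toy leaky integrate-and-fire neuron: a *phenomenological* cellular-scale model.
# All parameter values are arbitrary placeholders chosen only for illustration.

def simulate_lif(input_current, dt=1e-4, tau_m=0.02, r_m=1e7,
                 v_rest=-0.070, v_threshold=-0.054, v_reset=-0.070):
    """Integrate dV/dt = (-(V - v_rest) + r_m * I) / tau_m, spiking at threshold."""
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        v += ((-(v - v_rest) + r_m * current) / tau_m) * dt
        if v >= v_threshold:            # fire and reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# One second of a constant 2 nA input drives regular spiking in this toy model.
spikes = simulate_lif([2e-9] * 10000)
print(len(spikes), "spikes in 1 second of simulated time")

A molecular-scale description of the sort Jones argues would actually be required would not look like this at all: it would have to track individual ion channels opening and closing, proteins being modified, transmitters diffusing, and that is what puts the computational cost so far out of reach.
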
As I said, Jones is offering up a mostly "technical" debunking of the kind that enthusiasts for techno-transcendental conceits decry the lack of in my own critiques. But Jones is far from denying that these technical discussions are embedded in rhetorical frames, narratives, metaphorizations, conceptual problematics from which they derive much of their apparent intelligibility and force even if such discursive operations are not his focus.

You will notice, for example, that even in his brief summary above the notion of "simulating brains" figures prominently. About this he declares earlier on:
I want to consider two questions about mind uploading, from my perspective as a scientist. I’m going to use as an operational definition of “uploading a mind” the requirement that we can carry out a computer simulation of the activity of the brain in question that is indistinguishable in its outputs from the brain itself. For this, we would need to be able to determine the state of an individual’s brain to sufficient accuracy that it would be possible to run a simulation that accurately predicted the future behaviour of that individual and would convince an external observer that it faithfully captured the individual’s identity. I’m entirely aware that this operational definition already glosses over some deep conceptual questions, but it’s a good concrete starting point. My first question is whether it will be possible to upload the mind of anyone reading this now. My answer to this is no, with a high degree of probability, given what we know now about how the brain works, what we can do now technologically, and what technological advances are likely in our lifetimes. My second question is whether it will ever be possible to upload a mind, or whether there is some point of principle that will always make this impossible. I’m obviously much less certain about this, but I remain sceptical.
It's truly important that Jones insists such a discussion "glosses over some deep conceptual questions," but I wonder why this admission does not lead to a qualification of the predicate assertion that "it’s a good concrete starting point." To the extent that "uploading" is proffered by futurologists as a techno-immortalization scheme, it isn't at all clear that even a successful "simulation" would satisfy the demands that invest their scheme. I flog the talking point that "you are not a picture of you" endlessly to make this point, but one might just as easily point out that nobody seriously entertains the substitution of one person by a longer-lived imposter as a viable life-extension method. And while I would agree that selfhood is substantiated in an ongoing way by observers, I think it is important to grasp that these observations have objective, but also subjective and inter-subjective dimensions, none of which are adequate on their own and all of which supplement one another -- and also that these observations are not merely of "already existing" characteristics but of sociocultural scripts and norms through which selves are constructed/enacted in time. Uploading discussions tend to deploy radically impoverished understandings not only of selfhoods themselves but of the terms of their substantiation. Again, Jones does not deny any of this, and he tends to be enthusiastically open to such considerations, but I wonder whether technical debunkings that circumvent such considerations at their point of departure don't end up smuggling in more of the reductionist nonsense (which he critiques as much as I do) than he would like.

Another case in point, in Jones' truly welcome intervention into the work of metaphors in such discussions:
[T]o get anywhere in this discussion, we’re going to need to immunise ourselves against the way in which almost all popular discussion of neuroscience is carried out in metaphorical language. Metaphors used clearly and well are powerful aids to understanding, but when we take them too literally they can be badly misleading. It’s an interesting historical reflection that when computers were new and unfamiliar, the metaphorical traffic led from biological brains to electronic computers. Since computers were popularly described as “electronic brains”, it’s not surprising that biological metaphors like “memory” were quickly naturalised in the way computers were described. But now the metaphors go the other way, and we think about the brain as if it were a computer (I think the brain is a computer, by the way, but it’s a computer that’s so different to man-made ones, so plastic and mutable, so much immersed in and responsive to its environment, that comparisons with the computers we know about are bound to be misleading). So if what we are discussing is how easy or possible it will be to emulate the brain with a man-made computer, the fact that we are so accustomed to metaphorical descriptions of brains in terms of man-made computers will naturally bias us to positive answers.
This is music to my ears, but I have to wonder if these considerations really go far enough. (I'm a rhetorician for whom figurative language is the end-all be-all, so a working scientist like Jones might fairly question whether I would ever be satisfied on this score.) A scholar like Katherine Hayles has done extensive historical research into the ways in which the metaphors Jones is talking about here actually formed information science and computer science disciplines from their beginnings, so creating the conceptual terrain on which computers would seem plausibly describable later as "electronic brains" in the first place, an abiding conceptual terrain eventuating later still in the more recent reductions of discursive and cultural dynamics to "memes" and "viralities" -- or critical interventions into them nonetheless as efforts at a kind of "immunization," for example. Jones' talk about how we have been trained to treat glib biological and informational identifications as neutrally descriptive reaches deeper even than he reveals: how else do we account for the paradoxical proposal of his parenthesis that the brain is properly identified as a computer, while at once the brain is disanalogous with any actual computer? These associations are, as Jones says, so deeply ingrained as to be "naturalized." For me, it is enormously interesting that minds have so often been metaphorized as prostheses -- before its figuration as computer the mind has been mirror, blank slate, distributed steam pipes -- and that new figures do not displace old ones even when they are at odds. Freud's steampunk mind of repressions, displacements, projections, outlets lives on in the discourse of many who have made the digital turn to the computational mind. Who knows how or why exactly?

I find nicely provocative Jones' speculative proposal that "the origin of van der Waals forces, as a fluctuation force, in the quantum fluctuations of the vacuum electromagnetic field... could be connected to some fundamental unpredictability of the decisions made by a human mind" and I am pleased that he takes care to distinguish such a proposal from theories like that of Roger Penrose that "the brain is a quantum computer, in the sense that it exploits quantum coherence" (since, as he points out, "it... [is] difficult to understand how sufficient coherence could be maintained in the warm and wet environment of the cell"). For me, it is not necessary to save an ontic indeterminism traditionally ascribed to human minds through such expedients, since I was convinced well over twenty years ago by Rorty's argument in "Non-Reductive Physicalism" (from Objectivity, Relativism, and Truth, Cambridge: 1991, pp. 114-115) that one can be quite "prepared to say that every event can be described in micro-structural terms" while at once conceding that "[f]or most interesting examples of X and Y (e.g., minds and bodies, tables and particles) there are lots of true sentences about X's in which 'Y' cannot be substituted for 'X' while preserving truth... This is because any tool which has been used for some time is likely to continue to have a use... a tool can be discarded... [but i]n such cases X-talk just fades away; not because someone has made a philosophical or scientific discovery that there are no X's... [nor] by 'linguistic analysis,' but, if at all, in everyday practice." I am cheerful about the prospect that the free will indispensable to my sense of selfhood may be a perspectival or discursive effect, but however poetically or scientifically potent its jettisoning might eventually become, dispensing with it would unquestionably be stupid and sociopathic for now, rather than a matter of saying better the way the world is, or of speaking more in the language the universe prefers to be described in, or any nonsense of the sort.
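(For readers who want the textbook physics behind the fluctuation-force point quoted at the top of this paragraph -- standard results, not anything particular to Jones' argument: the London dispersion attraction between two neutral molecules scales as $U(r) \sim -C_6 / r^6$, and the same vacuum-fluctuation physics yields the Casimir attraction per unit area between parallel conducting plates, $F/A = \pi^2 \hbar c / (240\, d^4)$.)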

I doubt that saying so would go very far toward convincing Jones -- any more than most transhumanists, for that matter -- that my own preferred philosophical and rhetorical arguments are more clarifying than their preferred technical skirmishing over the state-of-the-art and projected technodevelopmental timelines. But, again, I do worry that accepting enough figurative (rhetorical) and conceptual (philosophical) assumptions to have mutually intelligible "technical" discussions with techno-transcendentalists, especially when there really is no need to do so, simply concedes too much ground to them for resulting debunkery at its best to do much good -- they can always respond, after all, with minute nonthreatening qualifications or terminological shifts that leave you debating angels on pinheads at the level of detail interminably.

I am quite sure Jones is alive to this very worry, as he concludes with a practical consideration that looms large in my critiques of the futurologists as well:
[I]deas like mind uploading are not part of the scientific mainstream, but there is a danger that they can still end up distorting scientific priorities. Popular science books, TED talks and the like flirt around such ideas and give them currency... that influences -- and distorts -- the way resources are allocated between different scientific fields. Scientists doing computational neuroscience don’t themselves have to claim that their work will lead to mind uploading to benefit from an environment in which such claims are entertained by people like Ray Kurzweil, with a wide readership... I think computational neuroscience will lead to some fascinating new science, but you could certainly question the proportionality of the resource it will receive compared to, say, more experimental work to understand the causes of neurodegenerative diseases.
As I point out above, the effort critically to address techno-transcendental formulations on something like their own terms can smuggle prejudicial and reductive assumptions, frames, and metaphorizations into the discourse of even their critics in ways that circumscribe deliberation on these questions and so set the stage for the skewed public policy language and funding priorities and regulatory affordances that Jones points to here.

As a demonstration of how easily this can happen, notice that when Jones offhandedly declares that "[i]t’s unquestionably true, of course, that improvements in public health, typical lifestyles and medical techniques have led to year-on-year increases in life expectancy," the inevitable significance with which techno-transcendentalists freight such claims remains the furthest thing imaginable from an "unquestionable tru[th]" (Jones declares it "hollow" just a few sentences later), and yet the faith-based futurological frame itself remains in force even as he proceeds with his case. Needless to say (or it should be), improvements in prenatal care, childhood nutrition and disease treatment can yield year-on-year increases in life expectancy without any year-on-year increases in life expectancy for people over the age of sixty-five, for example. And even if improvements in the treatment of heart disease and a few other chronic health conditions of older age yield some improvement for that cohort as well, this can and does remain compatible with absolute stasis of human longevity at its historical upper bound, even if presently intractable neurodegenerative diseases are ameliorated, thus bedeviling altogether the happy talk of techno-immortalists pretending that actuarial arrows on charts are rocketing irresistibly toward 150-year lifespans even in the absence of their handwaving about nanobotic repair swarms and angelic mindclone uploads. It is not an easy thing to address a critique to futurologists on terms they will not dismiss as hate speech or relativistic humanities mush and yet continue to speak sense at all. Richard Jones continues to make the effort to do so -- and succeeds far better than I do at that -- and for that I am full of admiration and gratitude, even if I devote my energies in response to his efforts to sounding warnings anyway.
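To see how the arithmetic works, here is a toy calculation (the numbers are invented purely for illustration, not actuarial data) in which preventing early-life deaths raises life expectancy at birth dramatically while leaving both the life expectancy of sixty-five-year-olds and the oldest age anyone reaches exactly where they were:

# Toy illustration: life expectancy at birth can rise while the expectancy of
# older cohorts and the maximum attained age stay fixed. All numbers invented.

def life_expectancy(deaths_by_age, from_age=0):
    """Mean age at death among those who survive to from_age."""
    cohort = {age: frac for age, frac in deaths_by_age.items() if age >= from_age}
    return sum(age * frac for age, frac in cohort.items()) / sum(cohort.values())

# Hypothetical "before": 20% die in infancy; survivors die at 70, 80 or 95.
before = {0: 0.20, 70: 0.24, 80: 0.40, 95: 0.16}
# Hypothetical "after": infant deaths prevented; survivors die exactly as before.
after = {0: 0.00, 70: 0.30, 80: 0.50, 95: 0.20}

print("Life expectancy at birth: %.0f -> %.0f" % (life_expectancy(before), life_expectancy(after)))
print("Life expectancy at 65:    %.0f -> %.0f" % (life_expectancy(before, 65), life_expectancy(after, 65)))
print("Oldest age reached:        95 in both cases")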

11 comments:

jimf said...

> > It’s an interesting historical reflection that when computers
> > were new and unfamiliar, the metaphorical traffic led from biological
> > brains to electronic computers. Since computers were popularly described
> > as “electronic brains”, it’s not surprising that biological metaphors
> > like “memory” were quickly naturalised in the way computers were described.
>
> A scholar like Katherine Hayles has done extensive historical research
> into the ways in which the metaphors Jones is talking about here
> actually formed information science and computer science disciplines
> from their beginnings, so creating the conceptual terrain on which computers
> would seem plausibly describable later as "electronic brains". . .
> Jones' talk about how we have been trained to treat glib biological
> and informational identifications as neutrally descriptive reaches
> deeper even than he reveals: how else do we account for the paradoxical
> proposal of his parenthesis that the brain is properly identified as a computer,
> while at once the brain is disanalogous with any actual computer?

Well, back in the day, the "biological and informational identifications"
weren't entirely "glib", and they weren't entirely the domain of naive
journalists either; they had some pretty heavy intellectual firepower
behind them.

From _The Dream Machine: J. C. R. Licklider and the Revolution That
Made Computing Personal_ by M. Mitchell Waldrop, pp. 53 - 61:
------------------
In the autumn of 1940. . . [MIT mathematician Norbert] Wiener was. . .
deep into a problem of desperate practicality: antiaircraft fire
control. . . English gunners simply could not react fast enough
to the German pilots' twists and turns. . . [But] no pilot could
ever make his trajectory completely random. . . [Constraints] might
make his trajectory predictable. . . Mathematically. . . it would be
like extracting a meaningful "signal" that was embedded in the
random "noise" of the aircraft's random motion. . .

[B]y January 1941, with the able assistance of an electrical engineer
named Julian Bigelow. . . "I had become engaged in the study of
a mechanico-electrical system which was designed. . . [to forecast]
the future. . ."

Seeing [this] done by an inanimate machine was. . . a cause for
astonishment in 1941. Indeed, it went straight to the heart of. . .
the mind-body problem. Ordinary physical matter ("body") is inherently
passive. . . In the physical world, every effect requires a
cause -- and the cause always comes first. . .

However, human beings (and other living things). . . have autonomy.
[They] can take action. . . have goals, expectations, desires,
**purpose**. . .

Wiener and Bigelow had just produced a purely physical device that
took action based on a prediction. . . [T]heir fire-control
apparatus had "causes" that lay in the future. . . for a very
deep reason: **feedback**. . .

[T]he attacking aircraft's **predicted** trajectory was constantly
being updated through feedback from its **actual** trajectory. . .

[D]id feedback also apply to voluntary action? Definitely, Wiener
and Bigelow argued. . .

[W]hen they consulted with. . . Arturo Rosenblueth, a. . . neurophysiologist. . .
at the Harvard Medical School, they learned that. . . injury to the
cerebellum. . . [will cause] a patient. . . [to] overshoot the mark
and then go into uncontrollable oscillation -- exactly as the
mathematical theory of feedback predicted. . .

In 1942. . . they. . . [laid] out their conclusions publicly at
a neurophysiology meeting in New York sponsored by the Josiah
Macy Foundation. . .

jimf said...

The anthropologist Gregory Bateson would look back on that moment
more than thirty years later and still marvel: "The central problem
of Greek philosophy -- the problem of purpose, unsolved for
2,500 years -- came within range of rigorous analysis." . . .

Warren McCulloch. . . a professor of psychiatry at the University of
Illinois Medical School in Chicago. . . came away from the Macy
meeting an inspired man. If one simple feedback loop was enough to
endow a machine with purpose, then how much more could millions
or billions of feedback loops accomplish? . . .

McCulloch. . . in collaboration with Walter Pitts, an eighteen-year-old
mathematics prodigy at the University of Chicago. . . assumed that
the brain as a whole could be modeled as a vast, interconnected
electrical circuit, with neurons serving as both the wires and the
switches. . .

The result -- today it would be known as a "neural network" model --
was admittedly a gross oversimplification of reality. But McCulloch
and Pitts argued that it did capture the abstract essence of brain
physiology. . .

Their own paper, published in 1943 as "A Logical Calculus of the Ideas
Immanent in Neural Activity," was essentially a demonstration that
their idealized neural networks were functionally equivalent to
Turing machines. . . As the science historian William Aspray
has written, "With the Turing machines providing an abstract characterization
of thinking in the machine world and McCulloch and Pitts's neuron
nets providing one in the biological world, the equivalence result
suggested a unified theory of thought that broke down barriers
between the physical and biological worlds." Or, as McCulloch himself
would put it in his 1965 autobiography _Embodiments of Mind_, he
and Pitts had proved the equivalence of all general Turing machines,
whether "man-made or begotten."

John von Neumann was deeply impressed by McCulloch and Pitts's neural-
network ideas from the moment he saw their paper ("Johnny," was
Norbert Wiener's message to his frequent correspondent, "You've got to
read this thing!") . . .

It was essentially impossible to think about computers in those days
without thinking about brains as well. Here, for the first time in
history, was a machine that could **think** -- or at least do arithmetic,
compile data, integrate differential equations, and accomplish all
manner of things that had once required human intelligence. Some
researchers, including Howard Aiken [of the Harvard Mark I], found
the computer-brain analogy simplistic, misleading, and even dangerous;
others, such as von Neumann and Wiener, found it subtle, illuminating,
and provocative. But nobody could ignore it. . .

von Neumann found the analogy so compelling that in late 1944 he joined
forces with Wiener and Aiken to organize a conference on the subject. . .
at Princeton on January 6-7, 1945. . .

jimf said...

[T]hat Princeton meeting in 1945 meant that McCulloch and Pitts's
neural-network ideas were fresh in von Neumann's mind a few months
later as he began to draft his report on the EDVAC [the
"Electronic Discrete-Variable Automatic Computer", a design for
a "stored-program" digital computer]. . .

[A]s von Neumann went on to describe the five functional units
of his abstract computer, he referred to them as "organs" and
went out of his way to make provocative comparisons with biological
functions. . . [He compared] the **input** units of the machine
with the sensory neurons of the brain. . . the **output** units
with the motor neurons of the brain. . . the remaining three
functions with the associative neurons, which are devoted to
abstract thought. . . [T]he **memory** unit would be the
computer's electronic scratch pad. . .
====

From _Building IBM: Shaping an Industry and Its Technology_ by
Emerson W. Pugh, p. 137:
------------------
Von Neumann's use of neurological analogies and terms outraged
[J. Presper] Eckert [one of the designers of the ENIAC,
whose work had been revealed to von Neumann] because it enabled
von Neumann to give unclassified talks about work at the
Moore School [of Engineering, at the University of Pennsylvania,
where ENIAC was built] "without giving any credit" to anyone
else. "I was too young to know how to fight back against this
type of behavior," Eckert recalls. Meanwhile Eckert and
[John W.] Mauchly [co-designer of ENIAC] were not able to talk
about their work because it was performed under a government
security classification of "confidential." Their first detailed
engineering progress report on the EDVAC (completed three months
after von Neumann's highly abstract, theoretical treatment)
was available primarily to project participants and
administrators. . .
====

Dale Carrico said...

Definitely, yes, Wiener, Bateson, von Neumann are central players in the Hayles account I mentioned.

jimf said...

This article was published around the time Jeff Hawkins
started his AI company "Numenta" (in 2005 -- nine whole years
ago now, but nothing dramatic has happened in the meantime.
I gather that Numenta still exists, but its main product
"Grok" seems to be a software package for monitoring an
IT department's systems and keeping them running smoothly --
"catches abnormal increase in latency", "detects a bad code
push", "identifies an unusual server pattern", etc.
http://numenta.com/grok/#resources ).

http://www.skeptic.com/reading_room/artificial-intelligence-gone-awry/
(via
http://www.cryonet.org/cgi-bin/dsp.cgi?msg=29707 )

jimf said...

Am I an AI? Is your cat a robot? Or your car?
Who _Her_?

In a comment thread on his Web site a few years ago,
Mike "Darwin" (Federowicz) offered some interesting
observations on how to tell if the acceleration is
acceleratin' into the Robopocalypse.

http://chronopause.com/chronopause.com/index.php/2011/04/19/cryonics-nanotechnology-and-transhumanism-utopia-then-and-now/index.html
-----------------------
Mike Darwin says:
April 22, 2011 at 4:55 pm

Maybe what is needed here are specific indicators that mature AI is
foreseeable and/or imminent. The idea of computer malware was foreseeable
long before the development of the www as it exists today (and has
existed for 20 years, now). An important point about viruses and
other malware is that they didn’t happen by accident. They required
(and still do) purposeful design of a vary high order. They are also
a direct product of a vast body of technological developments in
computing, microelectronics, and yes, software.

If we look back over the history of computing, starting say,
with ENIAC, you can retrospectively, and thus in theory prospectively,
enumerate a long list of prerequisite technological developments
that were necessary before malware could become a reality, let
alone a threat.

So, in this spirit, I offer the following:

1) The very nature of intelligence, probably a core property, is to
be able to rank and prioritize information from the environment
and weight it as to its likelihood of causing harm or providing
benefit. Being unable to do those would mean that we would spend
all of our time chasing every possible benefit and preparing for
every possible threat. Thus, we should behave intelligently with
respect to risks like AI, which may be extreme, but are also distant.

2) There will be sentinel events that indicate that AI is approaching
as a possibility that merits the expenditure of resources to
investigate and defend against credible threats.

3) Here are some examples of what I think are likely “signs” that
will necessarily precede any possible “AI Apocalypse”:

A) Widespread consumer use of fully autonomous automobiles which
drive themselves to their passenger-specified destination.

B) Significant (~25%) displacement of cats and dogs with cybernetic
alternatives as pets. Biological pets have many disadvantages,
not the least of which are that they grow old and die, get sick,
cannot be turned off while you go on vacation. Any mechanism that
effectively simulates the psychologically satisfying aspect of
companion animals will start to displace them.

C) Significant (~25%) fraction of the population spends at least
50% of their highest value (most intimate) social interaction in time
with a program entity designed to be a friend: receive confidences,
provide counseling, assist in decision making, provide constructive
criticisms, provide sympathy and encouragement. The kind of program
might perhaps best be understood by imagining a very sophisticated
counseling program, merged with GPS, Google search, translation,
and technical advice/decision making programs. The first iteration
of these kinds of programs will probably come in the form of highly
interactive, voice interrogate-able expert systems software for things
like assisted medical decision making and complex customer service
interactions. However, what most people want most in life is not to
be lonely; to have someone to share their lives with. This means
that any synthetic entity must necessarily be able to model their
reality and to determine and then “share” their core values. Friends
can be very different from us, but they must at a minimum share
certain core values and goals.

jimf said...

They may have different “dreams” and different specific objectives in
life, but they must have some that map onto our own. The first marginally
to moderately effective synthetic friends will be enormous commercial
successes and will be impossible to miss as a technological development.
They may not be able to help you move house, or share your sexual
frustrations, but if they share your passions and longings and can
respond in kind, even in a general way, then they will be highly
effective. It will also help a lot in that they can be with you constantly
and answer almost any question you have. If you want to know what time
it is Islamabad or how the Inverse Square law works, you can simply ask.
And not only will they know the answer, they will know the best way
of communicating it to you. For some, the best answer about the
Inverse Square Law will be to show the equation, for others it will be
a painstaking tutorial, and for others still, a wholly graphic exposition.
In this sense, this kind of AI will be the BEST friend you have ever had.

Fred Pohl came very close to this concept in THE AGE OF THE PUSSYFOOT
with his “Joymaker.” While the Joymaker was all these things, it wasn’t
friendly – it didn’t model Man Forrester’s mind and generate empathy,
and empathy derived counseling. And it didn’t know how to get Man Forrester’s
attention and make him care about the advice it had to give him.

D) Blog Spam which is indistinguishable from real commentary: even when
it proceeds at a very high level of commentary and interrogation.

Food for thought, in any event. Maybe Mark, Abelard or Luke can come
up with a more definite set of signs: SEVEN SIGNS OF THE IMPENDING AI
APOCALYPSE would be a catchy title. This is an example of the kind of
thing that is really useful in protecting against and preparing for
mature AI.

I would also note that there is a consistent failure in predicting the
future of technologies and of their downsides focusing on their
implementations writ large. No SF writer envisioned tiny personal
computers; and nuclear fatalists projected nation-state mediated
annihilation via MAD. Evil computers were seen to be big machines
made by big entities, as in the Forbin Project.

Of course, there are now big machines/networks and our very lives now
depend upon them (power grid, smart phone network, air traffic control…)
however, it is the little guys who represent the threat by being able to
crash these systems and keep them down for weeks or months.

If we look at the threat from AI, it seems to me it is most likely to
come from intense commercial pressure and competition to produce
the kinds of things we want most; namely friends who understand us
and care about us. What wouldn’t we give to have someone who would
not only listen to us and empathize with us, but who would also
focus and bring to bear all the knowledge and expertise available
on the Internet to help us reach our personal goals? That’s where
the money is and that’s where the danger is. A life coach with the
intelligence of god and the relentlessness of the devil.

Remember the Krell and the monsters from the Id.

Mike Darwin
====

Richard Jones said...

Thanks for your kind words; as for your (gentle) criticisms I don’t really disagree with them, so this isn’t so much a rebuttal of those criticisms but an explanation of why I wrote this the way I did.

Do I concede too much to the transhumanists by engaging on their chosen, “technical”, grounds? I’m actually very sympathetic to your position that the ideological and rhetorical dimensions of these arguments are primary. But it’s still worth addressing the questions on a technical level, if only to counter the claim often made by enthusiasts for superlative technologies that since no-one has explicitly rebutted the technical claims, they must therefore be correct. Also, frankly, I enjoy writing about science that interests me.

But part of what I wanted to do in this piece was reclaim the idea of what a “technical” discussion of “uploading” should look like. This isn’t something you do by appealing to your expertise writing computer programs; it needs to be informed by actual science. I am aware of the possibility that this kind of discussion does permit “smuggling in reductionist nonsense”, but here perhaps I’d like to reclaim reductionism as well. Reductionism is, after all, a very powerful way of thinking about problems, but if you’re going to use reductionism you should do it properly. Being a reductionist isn’t a matter of macho intellectual posing; reductionism is a problem-solving tool, powerful when used in appropriate situations, one that is about actually trying to unpick the components of the system you are trying to understand and understand the mechanisms by which they fit together. Of course it’s not the only conceptual framework you need - it’s not even the only tool in the physicist’s box; for example, emergence is quite a precisely defined and very important concept in the physics tradition I’ve been raised in.

So, if someone says that the way the mind works is simply a result of "neurons firing", this isn’t actually reductionist at all. To start with, it’s a metaphor, and the physical process the metaphor describes is actually a higher order, collective phenomenon. One key message of my piece is that a properly reductionist description of how the brain works talks about ion channels opening and closing, proteins being phosphorylated, neurotransmitters diffusing, and so on; it’s not about circuits and modules or similar technical-sounding, but actually metaphorical, descriptions.

Richard Jones said...

I should expand a little on the paradoxical way in which I explained why the computer metaphor of the mind is likely to be misleading, and then went on to say after all that the brain is a computer - perhaps I was being too gnomic. When we talk about a computer, we can be thinking of the actually existing computers we all use, based on von Neumann architectures, materialised with CMOS technology. But when we talk about computers we can also be thinking of the very rich tradition, going back to Turing and others, of formal theorising about the abstract notion of computers in general. Our current technological approach to building a computer is just one out of a very wide space of possibilities, one that we happen to have been locked into by all the interesting path-dependent processes of technological development. I wanted to emphasise that while brains are nothing like "computers" as understood as an object of current technology, I don’t see any reason to suppose that there’s anything magic about them that’s not encompassed by the realm of theoretical computer science. Perhaps in retrospect I should have written “the brain does computation” rather than “the brain is a computer” because of the tendency of reductionist readers to reflexively interpolate an unwarranted “just” between “is” and “a”.

I’m glad you appreciated the section on the fluctuation force character of van der Waals forces; my motive there wasn’t a perceived need to find some grounds to save the indeterminacy of human agency, but to bring people’s attention to a bit of physics that I think is well understood, but not widely known. There’s a tendency to think of an appeal to quantum indeterminacy as a piece of hand-waving obscurantism, but I want to emphasise that there is a very clear mechanism by which quantum indeterminacy manifests itself in the microscopic world, and there really is no excuse to imagine that the physics that underlies the operation of the brain is deterministic. It just isn’t (as far as we know now, and unless some surprising and new physics emerges).

Your distinction between life expectancy and maximum lifespan is of course entirely correct and was very much in my mind.

jimf said...

The first part of my (typical cut-and-paste job -- hi, Michael! ;-> )
last comment seems to have disappeared. Not that it was all that
important, but because the remaining fragment looks incoherent,
here it is:

Are my blog comments really coming from an AI? Is your cat (or
your car) a robot? Who _Her_?

Mike "Darwin" (Federowicz), a few years ago, offered up in a
comment thread on his own Web site some interesting remarks on
the signs and portents that we might be on the watch for as
indications that we might **really and truly** be accelerating
into the Robopocalypse:

http://chronopause.com/chronopause.com/index.php/2011/04/19/cryonics-nanotechnology-and-transhumanism-utopia-then-and-now/index.html
-----------------------
Mike Darwin says:
April 22, 2011 at 4:55 pm

Maybe what is needed here are specific indicators that mature AI is
foreseeable and/or imminent. The idea of computer malware was foreseeable
long before the development of the www as it exists today (and has
existed for 20 years, now). An important point about viruses and
other malware is that they didn’t happen by accident. They required
(and still do) purposeful design of a very high order. They are also
a direct product of a vast body of technological developments in
computing, microelectronics, and yes, software.

If we look back over the history of computing, starting say,
with ENIAC, you can retrospectively, and thus in theory prospectively,
enumerate a long list of prerequisite technological developments
that were necessary before malware could become a reality, let
alone a threat.

So, in this spirit, I offer the following:

1) The very nature of intelligence, probably a core property, is to
be able to rank and prioritize information from the environment
and weight it as to its likelihood of causing harm or providing
benefit. Being unable to do those would mean that we would spend
all of our time chasing every possible benefit and preparing for
every possible threat. Thus, we should behave intelligently with
respect to risks like AI, which may be extreme, but are also distant.

2) There will be sentinel events that indicate that AI is approaching
as a possibility that merits the expenditure of resources to
investigate and defend against credible threats.

3) Here are some examples of what I think are likely “signs” that
will necessarily precede any possible “AI Apocalypse”:

A) Widespread consumer use of fully autonomous automobiles which
drive themselves to their passenger-specified destination.

B) Significant (~25%) displacement of cats and dogs with cybernetic
alternatives as pets. Biological pets have many disadvantages,
not the least of which are that they grow old and die, get sick,
cannot be turned off while you go on vacation. Any mechanism that
effectively simulates the psychologically satisfying aspect of
companion animals will start to displace them.

C) Significant (~25%) fraction of the population spends at least
50% of their highest value (most intimate) social interaction in time
with a program entity designed to be a friend: receive confidences,
provide counseling, assist in decision making, provide constructive
criticisms, provide sympathy and encouragement. The kind of program
might perhaps best be understood by imagining a very sophisticated
counseling program, merged with GPS, Google search, translation,
and technical advice/decision making programs. The first iteration
of these kinds of programs will probably come in the form of highly
interactive, voice interrogate-able expert systems software for things
like assisted medical decision making and complex customer service
interactions. However, what most people want most in life is not to
be lonely; to have someone to share their lives with. This means
that any synthetic entity must necessarily be able to model their
reality and to determine and then “share” their core values. Friends
can be very different from us, but they must at a minimum share
certain core values and goals.

jimf said...

Richard Jones wrote:

> I’m glad you appreciated the section on the fluctuation force
> character of van der Waals forces; my motive there wasn’t a
> perceived need to find some grounds to save the indeterminacy
> of human agency, but to bring people’s attention to a bit of
> physics that I think is well understood, but not widely known.
> There’s a tendency to think of an appeal to quantum indeterminacy
> as a piece of hand-waving obscurantism, but I want to emphasise
> that there is a very clear mechanism by which quantum indeterminacy
> manifests itself in the microscopic world, and there really
> is no excuse to imagine that the physics that underlies the
> operation of the brain is deterministic. It just isn’t (as
> far as we know now, and unless some surprising and new physics emerges).

I wrote to an acquaintance 11 years ago:
----------------
I did read a book a few months ago by a psychiatrist turned
physicist (an Orthodox Jew whose name unfortunately turns up in the
literature about "cures" for homosexuality and who has also
written about the so-called Bible Code -- not a particularly
auspicious curriculum vitae, IMHO; clearly, the guy's got
an agenda) named Jeffrey Satinover, called _The Quantum Brain_
(yes, yes, dramatic eye rolls here, chortle to your heart's
content; but the book was interesting) in which I heard for
the first time that protein folding may depend on quantum
processes (a form of tunnelling, IIRC) in order to proceed
as quickly as it does to its minimum energy state. I think
the general term for that and similar processes is "quantum
annealing". In any case, the author points out that if
there are any such points of contact between the quantum and
the macro world, amplifying quantum effects upward, then
life in its 4-billion-year history of evolution has assuredly
picked up on means to leverage such effects to its advantage.
====

In _Bright Air, Brilliant Fire_, the late Gerald M. Edelman
pointed out (p. 225) that in human-built computers, "the small
deviations in physical parameters that do occur (noise levels,
for example) are ignored by agreement and design."
And "There is no ambiguity in the interpretation of physical states as symbols
because the symbols are represented digitally according to rules
in a syntax. The system is **designed** to jump quickly between
defined states and to avoid transition regions between them..."

But in a biological brain "there can be many alternative
means and pathways, competitively selected out of a large
population of variant possibilities, which can accomplish more or
less adequately the same functional task. The detailed physical
structures which participate in a given task will therefore vary
stochastically among brains depending on the vagaries of chance
and personal history: no two brains (even those of identical
twins) will contain identical populations of neurons or be wired
identically."

If Edelman and other "neural Darwinists" (Jean-Pierre Changeux et al.)
are right, then **noise** (variation) is essential to the whole
process.