Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Friday, November 20, 2015

Relevant Expertise in the Critique of Robocultism

Longtime readers of Amor Mundi will be amused to see that my current twitter exchanges with robocultists trundle along the same well-worn grooves as they did a decade ago. Must be because all that accelerating acceleration of acceleration is disrupting everything to shock level one!

73 comments:

jimf said...

> my current twitter exchanges with robocultists trundle along the
> same well-worn grooves as they did a decade ago.

Yes, but now you get to hear from a whole new generation of
'em (where are the Rokos and Pecos of yesteryear?).

Who is Gareth Nelson? Presumably the guy referenced in:
https://en.wikipedia.org/wiki/BitInstant
25ish, I'd guess, if he's approximately the same age as:
https://en.wikipedia.org/wiki/Charlie_Shrem

Mr. Nelson wonders:

> So what happens when we get computational models of
> mammal brains?

Then you'll have egg all over your grizzled face, Dale!

Or not.

http://mathbabe.org/2015/10/20/guest-post-dirty-rant-about-the-human-brain-project/
----------
This is a guest post by a neuroscientist who may or may not be
a graduate student somewhere in Massachusetts. . .

Dale Carrico said...

I do like that, grizzled!

jimf said...

Speaking of the good old days, Michael Anissimov's "Accelerating Future"
blog seems to have accelerated out of existence.

The last viable Wayback Machine snapshot, from 17 July 2015,
https://web.archive.org/web/20150717035527/http://www.acceleratingfuture.com/michael/blog/
has an ad for a book by Anissimov, _Our Accelerating Future_,
published (according to Amazon) on August 21, 2015
http://www.amazon.com/gp/product/B014B3N9VS
But the last blog post there is dated 5 Feb 2014 (entitled
"Scientists Image an Entire Flatworm Brain in Realtime").

The very next Wayback Machine snapshot brings up the notice
"This domain name expired on 9/15/2015 and is pending renewal
or deletion."

I guess The Future has migrated to Twitter, huh?

;->

Dale Carrico said...

At any rate, the Stoopid Footure has done. Poor Michael, of course, has drifted from being a sad bottle-washer for guru-wannabe Eliezer into the more fetid swamps of ("Neo-")Reaction where the bottles are unwashed. It is striking how incriminating fingerprints vanish as the imperishable spirit-stuff of the cyberspace into which so many of our foolish futurological friends want to upload themselves breeze and break and bleed away in the buggy buzz (with a clown like Prisco these vanishing acts are a matter of art -- he changed blogs and sites and movements -- well, their titles at any rate, the ideas remain static in the usual manner of accelerating change artists -- like other people change underpants). The facile fallacies of the early transhumanoid web are endlessly recycled, but only those of us with old-fashioned memories of their nonsense remain to tell the tale. Lucky us.

Gareth Nelson said...

Thought I'd jump in here as I've been mentioned.

For those wondering, I'm 28 now, so I suppose you could say I'm "the new generation", if that's at all relevant.

Anyway...

Regarding the human brain project and that link: I had a quick look at it and partly agree with it. Where I disagree is with the notion that technical problems and gaps in knowledge are somehow a fundamental obstacle.

A full point-by-point refutation of that article would probably not be appreciated here, but one thing I will point out is that the most relevant technical issues right now boil down to obtaining synaptic weights from scans and simulating the chemical environment. After that, we have the problem of computing power. I personally believe that simulating a full mammalian brain in realtime is not possible with traditional architectures (or at least not with any realistic amount of hardware); instead, you want custom hardware that implements the mathematical models of neuron behaviour in silicon, using something like ktRAM.
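To make "mathematical models of neuron behaviour" concrete, here is a minimal leaky integrate-and-fire neuron, one of the simplest such models, sketched in Python. All parameters and the constant input are arbitrary illustration values, not a claim about any particular hardware or brain:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: one illustrative
# example of the kind of mathematical neuron model under discussion.
# All parameters are arbitrary demonstration values.

def simulate_lif(input_current, dt=0.001, tau=0.02, v_rest=-70.0,
                 v_reset=-70.0, v_threshold=-54.0, resistance=10.0):
    """Integrate dV/dt = (-(V - v_rest) + R*I) / tau; emit a spike
    and reset whenever V crosses threshold. Returns spike times."""
    v = v_rest
    spikes = []
    for step, current in enumerate(input_current):
        v += (-(v - v_rest) + resistance * current) * (dt / tau)
        if v >= v_threshold:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant suprathreshold drive produces a regular spike train;
# zero drive produces none.
spike_times = simulate_lif([2.0] * 1000)  # 1 simulated second, 1 ms steps
```

Custom silicon of the sort described would implement this update rule (or a far more detailed one) directly in hardware rather than looping over it in software.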

Something that is commonly pointed out is that we don't know every detail of how brains work - which is so true it seems almost pointless to state. The jump, however, from "we don't know every tiny detail" to "therefore we can't replicate it ever" is a huge fallacy.

To use an analogy, lots of people have built electronic circuits from diagrams without knowing the full details of how they work (due to lack of knowledge), but they still work fine.

There is no evidence at all that the operation of brains depends on anything other than the interaction of neurons. Therefore, if we can build accurate models of neurons and configure them the same way as the original physical brain (equivalent to building a circuit from a diagram), then it WILL work - unless brains do depend on something beyond the interaction of neurons after all.

We do not need to know how to crack the neural coding problem (what patterns of spikes encode etc), only how to make our models behave the same as the real thing.

Claiming that uploading (modelling human brains on computers) will never be possible is claiming that we will never:

1 - Have a means of scanning human brains down to the synapse level and storing the data
2 - Have mathematical models of human neurons of high enough accuracy
3 - Have hardware capable of implementing those mathematical models at scale

There are about 100 billion neurons in an adult human brain (this of course varies a little, but for the sake of argument let's use it).

A modern i7 CPU has about 1.4 billion transistors.

A data centre can easily fit 100,000 such CPUs - imagine if, instead of traditional CPUs, they were custom hardware with the neuron models implemented directly in the silicon rather than in software.
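The back-of-envelope arithmetic behind these figures can be made explicit. All numbers are the rough values quoted in this comment, not measurements:

```python
# Back-of-envelope arithmetic using the rough figures quoted above.
neurons_in_human_brain = 100e9      # ~100 billion neurons
transistors_per_i7 = 1.4e9          # ~1.4 billion transistors
cpus_per_data_centre = 100_000

total_transistors = transistors_per_i7 * cpus_per_data_centre
transistors_per_neuron = total_transistors / neurons_in_human_brain
# ~1,400 transistors of budget per neuron across the whole data
# centre -- which is why the argument favours custom silicon over
# general-purpose CPUs running neuron models in software.
```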

The problem with scanning these days is actually mainly one of data storage and of extracting synaptic weights - and the latter problem has already been partly solved:
http://www.sciencemag.org/content/337/6093/437.abstract

As for data storage, it's theoretically possible to use a time/space tradeoff and analyse the scan data as it comes in, saving the results directly into your models. Failing that, just throw hardware at it - storage tech is getting denser all the time.

Actual analysis at scale of scans of human brain tissue to extract connectomes and weights is going to be a problem, but it still mostly boils down to "throw hardware at it".
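One way to read the time/space tradeoff described above is as streaming analysis: process each scan chunk as it arrives and persist only the extracted weights, never the raw data. This is purely an illustrative sketch; the chunk format and the `extract_weights` stand-in are hypothetical placeholders, not any real pipeline:

```python
# Illustrative time/space tradeoff: raw scan chunks are analysed as
# they stream in, and only the extracted synaptic weights are kept.

def extract_weights(chunk):
    """Stand-in for real image analysis: here each chunk is already
    a list of (pre_neuron, post_neuron, weight) tuples."""
    return chunk

def stream_scan_to_model(scan_chunks, model):
    for chunk in scan_chunks:              # raw chunk never stored
        for pre, post, weight in extract_weights(chunk):
            model[(pre, post)] = weight    # keep only the result
    return model

connectome = stream_scan_to_model(
    [[(0, 1, 0.5)], [(1, 2, -0.25)]], model={})
```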

This just leaves us with the question of mathematical models of neurons. Since neurons are physical objects bound by the laws of physics, this one is just a matter of doing the hard work to refine the models.


tl;dr: Lots more hardware resources and good old-fashioned scientific investigation to refine the maths, and uploading should become possible.

Dale Carrico said...

"uploading (modelling human brains on computers)"

That is not how the term is actually deployed in the techno-transcendental discourses I critique. It is trivially true that brains, climates, orcs can be "modeled on computers." No shit, Sherlock. That these activities can be edifying and even illuminating is not under discussion. That "uploading" can now or in any relevantly proximate construal represent any kind of techno-immortalization strategy or "soul-migration technique" is an incoherent utterance resting on ignorance, metaphorical trickery, wish-fulfillment fantasizing, flim-flammery, and pseudo-science. You are very confident that "we" are on the right track to blah blah blah blah. I would say that and a couple bucks can get you an iced coffee at Starbucks. But in our present debased epoch, clearly saying these sorts of things can get at least some of you robocultists guest spots on television and book contracts and corporate scratch. It's still bunkum for the reasons I have always said it is -- but by all means when you are a Robot God you can do your I-told-you-so-dance on my grave. (You won't, of course, you are going to die. And I will always go on thinking you are a very foolish fellow, indeed.)

Gareth Nelson said...

This is the point I actually agree with you on: if someone was to get a full accurate model of my brain running on a bunch of computers that'd be awesome but I would still be right to fear death if someone then pulled a gun out and pointed it at me.

"A picture of you is not you" as you've put it in the past.

Of course what is theoretically possible is using a brain model as a "cheat" to get AGI - once you've got an uploaded human mind you can start playing about with it and enhancing it to massively boost cognition, or in other words create a superintelligence.

Dale Carrico said...

Not only not "of course" for that last paragraph -- but all of that is complete stupidity six ways to Sunday. Lots of things are "theoretically possible" according to what little we know without knowing if we know what matters, or in a way that remains too developmentally remote to get exercised about, or involves too great an expense or too bedeviling a political quandary or who knows what else. Your model of aspects of the brain or brain activity is now an "uploaded human mind," I notice. With all the conceptual and metaphorical freighting that wee translation buys you though you are writing a check your ass can't cash. And now, as usual, talk arrives of "enhancing" (according to what standards? oh, never mind) the exhibition of "cognition" suddenly attributed somehow to this model? Wow, that's something. And now if we pretend all that isn't just a bunch of rhetorical sleight of hand -- zero scientific or "technical" content doing the work that matters there, by the way, precisely according to the complaint in the post -- you are "boosting" and, hell *Massively* so, this cognition. Blam, "superintelligence." So glib! So easy! And yet -- still as always exactly nothing. Enhanced nothing. Boosted nothing. Massively boosted nothing. Super nothing. Computer modeling can be diagnostically and therapeutically useful in specific contexts with serious people devoted to solving real problems. Futurological hyperventilating is horseshit and a distraction from real science, though I do not question it provides a nice line in soft porn to sell landfill-destined wham-o crap with. Be that guy. I can't stop you. You can't stop me exposing its moonshine.

Gareth Nelson said...

When people talk about uploading they are precisely talking about putting a human mind in a computer using a mathematical model of a human brain.

An uploaded human mind is a copy of a biological human mind (as "implemented" in nature using neurons) running on a computer (or several computers).

Since brains are what contain minds, a model of a brain will indeed also contain a mind. If my organic brain is able to read and respond to this post, an accurate model of my brain would also be able to read and respond to this post given the same sensory input and motor output.

I'm somewhat confused by your use of scare quotes around "cognition" - cognition is what brains do. Do you think an accurate model of a brain could somehow be accurate while lacking cognition?

As for enhancement, well a few enhancements come to mind that could be done on an upload:

Speeding up the simulation - this may or may not be possible depending on implementation details, but if possible it'd allow doing more thinking in less time.

Tweaking the simulated neurochemistry so you can have identical effects to nootropic drugs and even the use of addictive drugs without addiction (you can simply reverse the adaptations).

Elimination of emotional states detrimental to the desired cognitive process - for example disabling boredom while using the upload to analyse data that traditional computers struggle with.

Redesigning the actual architecture of the virtual brain and eliminating bugs and flaws.

None of this is EASY - I never claimed it was - but dismissing the concept as impossible without exploring it or its implications is downright silly. No progress ever happens if nobody looks beyond what we already have and ponders how to get to the next level.

It is the job of philosophy and ideology to talk about whether we want to or should do something and the job of science to tell us whether it can be done.

Science tells us that with time and hard work we should be able to eventually build a system capable of emulating a human brain and you even agreed with me that brains can be modelled on computers. The very nature of computers also means this would allow certain enhancements to the resulting mind (and yes, an accurate model of a brain would copy the mind or it would not in fact be an accurate model).

Most people would agree that the use of computational modelling of brains (or parts of brains) for medical research into neurological disorders is a good thing.

Where we seem to disagree is that I (and many others) believe that we should work on developing this technology and using it for going beyond medical practice and into trying to enhance the human condition.

In other words, what we have here is an ideological disagreement, but you still keep couching it in terms of practical possibility.

Dale Carrico said...

"precisely talking about putting a human mind in a computer"

Yes, that is why they are stupid. The use of "precisely," there Gareth, really, is just too priceless. Although you want to pretend you are a super-scientist making Very Serious plans, I am afraid I do indeed know enough to have noticed that your "plans" are actually a bad poem. My dispute with you isn't just "ideological" in the restrictive sense you mean -- in which, presumably I am a luddite deathist too skeered to face your glorious future and am crying out for somebody to hold me -- but a dispute in which I point out that you are using words like "mind" and "enhance" and even like "in" in ways that don't accord with their usual meanings and in ways that are getting you mired in conceptual incoherence which does indeed suggest rather dire things about their "practical possibility" when it comes to it.

I'll allow your inevitable reply -- which appears to be your sex life -- to be the last word in this joyless exchange. I do wonder why you do keep pestering me, Gareth, when presumably you have so much HARD WORK you could be doing?

jimf said...

> None of this is EASY and I never stated it is easy, but dismissing the
> concept as impossible without exploring it. . .

There's some evidence that computer programmers are the wrong folks to judge
how easy the "uploading" project (or even the "strong AI" project)
will turn out to be, and that they're not the folks best suited to "explore"
its feasibility or imminent likelihood. Computer programmers are too close to their favorite
toys. The same was probably true in earlier eras of the electrical engineers
who created the telephone network, and the mechanical engineers who harnessed
the steam engine. The late Gerald M. Edelman remarked, several
decades ago:

"[Are] artifacts designed to have primary consciousness...
**necessarily** confined to carbon chemistry and, more specifically,
to biochemistry (the organic chemical or chauvinist position)[?]
The provisional answer is that, while we cannot completely
dismiss a particular material basis for consciousness in the
liberal fashion of functionalism, it is probable that there will
be severe (but not unique) constraints on the design of any
artifact that is supposed to acquire conscious behavior. Such
constraints are likely to exist because there is every indication
that an intricate, stochastically variant anatomy and synaptic
chemistry underlie brain function and because consciousness is
definitely a process based on an immensely intricate and unusual
morphology"

-- Gerald M. Edelman, _The Remembered Present_, pp. 32-33

> you even agreed with me that brains can be modelled on computers

Internal combustion engines can be modelled on computers, too, but
nobody thinks the models can be used to propel actual cars. Not even in
Second Life. ;->

> As for enhancement, well a few enhancements come to mind. . .

Yes, this is the science fiction angle. Entertaining enough, if you're
reading an Iain Banks "Culture" novel. Not to be confused with
proximate research outcomes, if you want to avoid bamboozling yourself.
The notion of a Vingean technological "singularity" that will bring about these
results automagically, along the way to some omega-point of
eschatological transcendence, as inevitably as a black hole sucking in a passing star,
and maybe even in the lifetimes of those now reading this if they
take the right vitamin pills, is most likely the result of wishful thinking,
not unlike the wishful thinking at the back of many earlier religions.
Whether those SFnal "enhancements" could ever come about, even assuming the time-scales
of an Olaf Stapledon novel, is simply not known.

"Unluckily, it is difficult for a certain type of mind to grasp
the concept of insolubility. Thousands...keep pegging away at
perpetual motion. The number of persons so afflicted is far
greater than the records of the Patent Office show, for beyond the
circle of frankly insane enterprise there lie circles of more and
more plausible enterprise, until finally we come to a circle which
embraces the great majority of human beings.... The fact is that
some of the things that men and women have desired most ardently
for thousands of years are not nearer realization than they were
in the time of Rameses, and that there is not the slightest reason
for believing that they will lose their coyness on any near
to-morrow. Plans for hurrying them on have been tried since the
beginning; plans for forcing them overnight are in copious and
antagonistic operation to-day; and yet they continue to hold off
and elude us, and the chances are that they will keep on holding
off and eluding us until the angels get tired of the show, and the
whole earth is set off like a gigantic bomb, or drowned, like a
sick cat, between two buckets."

-- H. L. Mencken, "The Cult of Hope"

Meanwhile, you could just attend a ComicCon in the spandex tights of
your favorite superhero, or re-read _The Lord of the Rings_
and pretend you're an Elf.

Real science is rather more difficult, I'm afraid.

Gareth Nelson said...

"Computer programmers are too close to their favorite
toys"

Since we're talking about the feasibility of doing something with computers, at the very least you want input on the computer science side of things. Achieving uploading will be the result of multidisciplinary work, and one of those disciplines is indeed computer science and software engineering.

"The provisional answer is that, while we cannot completely
dismiss a particular material basis for consciousness in the
liberal fashion of functionalism, it is probable that there will
be severe (but not unique) constraints on the design of any
artifact that is supposed to acquire conscious behavior. Such
constraints are likely to exist because there is every indication
that an intricate, stochastically variant anatomy and synaptic
chemistry underlie brain function and because consciousness is
definitely a process based on an immensely intricate and unusual
morphology"

Are you claiming the way the brain works depends on chemical reactions that are immune from any kind of simulation with any technology we could develop?

Note that cognition is a separate matter from consciousness - I make no claim as to whether an upload would have qualia and subjective experience, and I personally believe that consciousness is something we won't truly understand for a long time due to its inherently metaphysical nature. Intelligence, on the other hand, is a problem that can yield to the scientific method.

"Internal combustion engines can be modelled on computers, too, but
nobody thinks they can be used to propel actual cars. Not even in
Second Life. ;->"

The thing is, a model of an internal combustion engine doesn't output actual physical movement - which is what engines are designed to do and their whole purpose.

An accurate model of a brain on the other hand would, by the very nature of being an accurate model, output the same signals as a real brain. If it did not output the same signals when given the same input it would not be an accurate model.

As for the enhancements: the ones I mentioned were specific possibilities, just off the top of my head. Being able to implement the effects of nootropic drugs without tolerance, for example, or to eliminate the cognitive deficits that come from sleep deprivation, or to turn off boredom - all of these would vastly improve cognition and would be far simpler to achieve in an upload.

Gareth Nelson said...

You may notice that I use the terms "upload", "brain model" and "simulation" interchangeably - that's because they essentially all mean the same thing. If you run a mathematical model of a brain and it does not yield intelligence then your model is wrong and needs to be refined.

I am curious what sort of accurate brain model would NOT yield intelligence. Unless you're talking about modelling only parts of the brain (which has been done for various neural circuits) it seems to me that an accurate model of any system must yield the same behaviour as that system and this general principle includes brains.

Science is indeed difficult, and again you'll note that I have repeatedly stated none of this is easy - it's all very difficult indeed, but that is not the same as impossible. The comparison to perpetual motion machines is downright silly, as the laws of physics quite strongly prohibit such devices. On the other hand, no laws of physics prohibit intelligence (we're living proof of that), and there are no known reasons why biological intelligence should be the only possible form - biology is not magical.

I find it amusing that you reference The Lord of the Rings and elves, since in Tolkien's mythology elves reincarnate after physical death, their souls detaching from their bodies. The concept of a mysterious "soul" distinct from the brain is still widely believed by a lot of people, but as time goes on more and more evidence shows that everything that makes us who we are is just in the brain. There's no need for souls, and I suspect that people who believe uploading to be literally impossible are still stuck in dualist thought.

Or in other words, if either of us is pretending to have something in common with a Tolkien elf, it's you.

Dale Carrico said...

Better an elf than a troll, dear.

jimf said...

> > . . .while we cannot completely dismiss a particular material
> > basis. . ., it is probable that there will
> > be severe (but not unique) constraints on the design. . .
>
> Are you claiming the way the brain works depends on
> chemical reactions that are immune from any kind of
> simulation with any technology we could develop?

Notice how we've elided the **digital computers** implicitly
under discussion to "any technology".

> . . . a model of an internal combustion engine doesn't output
> actual physical movement. . . An accurate model of a brain. . .
> would. . . output the same signals as a real brain. . .

Again, there's equivocation between the notion of "modelling"
in a practical scientific sense -- performing arithmetic
using a digital computer according to the dictates of a
mathematical formula that, within a circumscribed range of
inputs and with an understood degree of uncertainty, might
(it is hoped) shed light on some real-world phenomenon --
with the (science-fictional) notion of the wholesale
**substitution** of the "model" for the real thing (yes,
I've read Greg Egan's _Permutation City_ and other
SFnal explorations of the ideas, and I know about the
"simulation argument"). It's understandable, once again,
that computer folks are so taken with this, in part because
computers really can, in fact, **substitute** for other
computers. I've used VMware and I've played with simulators
for old mainframes and minicomputers.

> I suspect that people who believe uploading to be literally impossible
> are still stuck in dualist thought.

I've had this conversation before. There are people who think that
being doubtful of the notion that **digital computers**, as we know
and love them today, could (in some hypertrophied but essentially
finite-state-logical, machinically similar form) "simulate"
(in the full sense of "substitute for") a human brain, a human
being, a human society, or even **a whole universe** -- they think
that being doubtful of that notion, I say, is tantamount to
being a dualist. It ain't so. You can be a materialist and be
skeptical of what Jaron Lanier calls "cybernetic totalism" at one
and the same time.

jimf said...

> You may notice that I use the terms "upload", "brain model"
> and "simulation" interchangeably - that's because they essentially
> all mean the same thing. . .

A decade and a half ago, there was a silly thread (at least
I thought it was silly, even at the time) on the Extropians' mailing list
with the subject line "'analog computer' = useless hypothesis?".
I dipped my own oar into that thread, in fact:
http://extropians.weidai.com/extropians.2Q01/1799.html

I was reminded of that old conversation the other day while browsing
YouTube videos about "neuromorphic" computing and possible
directions for the next decade of computers now that
Moore's Law seems to be running out of steam. Such as this one:

Will Analog Computing and Neural Nets Power Innovation
after Moore's Law?
Doug Burger, Director, Client & Cloud Apps,
Microsoft Research
Jul 17, 2013
https://www.youtube.com/watch?v=dkIuIIp6bl0

This Microsoft talk amused me in part because, in contrast to the
usual SFnal notion of taking the (breathtakingly energy-efficient,
by contemporary or foreseeable digital computer standards) human brain
and "uploading" it into a digital computer, the Microsoft guy talked
about achieving future performance enhancements in computation by isolating
functional units in a computer program running on a conventional
digital computer and then "uploading" (or perhaps "downloading" ;->)
them into **analog** substrates that would "compute" the
same results much faster and/or with less energy expenditure,
at the cost of (more-or-less tolerable) increased noise and
reduction in precision. Sort of standing the conventional
turn-of-the-21st-century idea of uploading (as espoused back then
by Greg Egan fans and Extropians) on its head. I got a chuckle out of
that.

The "increased noise", BTW, may be an essential part of how
biological brains work, at least if you believe Gerald Edelman
and other "neural Darwinists".

But apropos Darwin, evolution, and noise, another Extropian,
Damien Sullivan, wrote (in 2001):

> I also can't help thinking at if I was an evolved AI I might not thank my
> creators. "Geez, guys, I was supposed to be an improvement on the human
> condition. You know, highly modular, easily understadable mechanisms, the
> ability to plug in new senses, and merge memories from my forked copies.
> Instead I'm as fucked up as you, only in silicon, and can't even make backups
> because I'm tied to dumb quantum induction effects. Bite my shiny metal ass!"

;->

Dale Carrico said...

It has always seemed to me that the dismissal as trivial of the actual material incarnation of actually-existing thought or of the actual material carrier of information by uploading enthusiasts is indebted to the very dualism they project onto their opponents. Of course, it is *because* I see no reason to be otherwise than materialist myself that I regard the uploading conceit as incoherent. The robocultic charges of dualism and vitalism always remind me of the homophobic sputterings of a closetcase, I always find myself clucking "Whatever, Mary" to myself when they go on about it.

Gareth Nelson said...

Neuromorphic computing is in fact one of the most promising technologies that could enable uploading, so bringing it up as some kind of argument against uploading strikes me as quite silly.

Neuromorphic computing is still computing - it's still an artificial construct that we can build.

"Again, there's equivocation between the notion of "modelling"
in a practical scientific sense -- performing arithmetic
using a digital computer according to the dictates of a
mathematical formula that, within a circumscribed range of
inputs and with an understood degree of uncertainty, might
(it is hoped) shed light on some real-world phenomenon"

The way models are used is to test a hypothesis: set up a model of the proposed mechanism of some phenomenon and then see whether the same behaviour results. If it does not, the model is flawed (i.e. not accurate) and must either be refined or replaced.

If you build a model of a whole brain and it is accurate, that means it yields the same behaviour.
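The test-and-refine loop described here can be caricatured in a few lines: propose a candidate model, compare its behaviour against the recorded behaviour, and reject it until the two match. The data and the one-parameter "gain" model below are invented purely for illustration:

```python
# Toy version of the refine-until-behaviour-matches loop: candidate
# models are rejected until one reproduces the recorded response.
# All data and the model form are invented for illustration only.

recorded_response = [2.0, 4.0, 6.0, 8.0]   # pretend measurements
stimulus = [1.0, 2.0, 3.0, 4.0]

def model_response(gain, inputs):
    return [gain * x for x in inputs]

def calibrate(candidates, tolerance=1e-9):
    for gain in candidates:
        predicted = model_response(gain, stimulus)
        error = max(abs(p - r)
                    for p, r in zip(predicted, recorded_response))
        if error <= tolerance:   # behaviour matches: accept model
            return gain
    return None                  # every candidate refuted

best_gain = calibrate([0.5, 1.0, 1.5, 2.0])  # finds gain = 2.0
```

A whole-brain model would face the same logic at incomparably greater scale: mismatched behaviour refutes the model, matched behaviour (within tolerance) provisionally accepts it.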

What is likely to happen in practice is that models of single neurons and single brain regions will get increasingly accurate. We'll likely see results from the OpenWorm project at some point too, which will then be refined, and more complex organisms will be modelled.

With OpenWorm, the plan is to claim success once the virtual worm yields all the same behaviours as a real physical C. elegans, and that strikes me as a realistic goal with current digital hardware. Making the simulation more efficient is a matter of refining the implementation using a mix of new purpose-built hardware and good old-fashioned software optimisation.

If we are able to pull this off for a worm then it's not too big a leap to say that it is at least in principle possible to do for more complex brains, refining the approach over time and eventually getting to the level where we can do the same for human brains.

There are of course some possible objections to the idea that make sense. For example, it may be that we're never able to get enough processing density to simulate a human brain in realtime due to latency in the connections between parts, or it may be that there are properties of human brains that are actually immune from scientific investigation but vital to functionality.

So long as a system is in fact physical though, it is possible in principle at least for a computer to simulate it in some form.

I think it is rather an extraordinary claim that human brains require something that is immune from mathematical modelling in order to function.

It is still a bold (though less so) claim that we will never practically implement such a model, and I'm personally fairly optimistic that we'll be able to do so one day.

"The "increased noise", BTW, may be an essential part of how
biological brains work, at least if you believe Gerald Edelman
and other "neural Darwinists"."

I'd generally agree with this, but I'd disagree that this implies in any way that computers (and yes, I include alternative architectures such as neuromorphic chips in my definition of "computer") cannot also make use of or tolerate noise.

As for that last quote from Damien Sullivan - well, I've always been of the view that uploading as an approach to AGI is a "cheat": it's often useful as a thought experiment to show that AGI is in fact possible, but in practice it'd be better to implement AGI from the ground up rather than by simulating brains.

"It has always seemed to me that the dismissal as trivial of the actual material incarnation of actually-existing thought or of the actual material carrier of information by uploading enthusiasts is indebted to the very dualism they project onto their opponents. Of course, it is *because* I see no reason to be otherwise than materialist myself that I regard the uploading conceit as incoherent"

Gareth Nelson said...

Dale - you've said (and I love this quote of yours) "a picture of you is not you", which is entirely true, and I do not claim it is at all possible to "transfer" consciousness from a brain into a computer.

But if we stick with the picture analogy, I would argue that a copy of a picture is still as useful for the same purposes. When we look at a beautiful work of art, we derive pleasure from appreciating the skill of the artist, that is the primary purpose of a work of art. We can look at a copy of the art and assuming the copy is of high quality we can use it for the same purposes - appreciating the beauty.

If we say that the purpose of a brain is to yield intelligent behaviour (and secondly to control a body - but we generally value people for what's in their cerebral cortex, not their brain stem) then a copy of a brain that yields intelligent behaviour serves the purpose just fine, at least for other people.

If we could copy the brains of great people (I'm sure we can all think of some historical figures that would suit that definition) and "run" the copies and communicate with them, then that would serve a useful purpose, at least for us.

Of course, for my own part, I would rather survive, and copying my brain into a computer won't help me there at all. That said, if I were about to die and we had the means to copy my brain and people wanted it, I'd consent to it - I just wouldn't expect to wake up later.

If you're a true materialist you accept that the brain is just a physical object with some complex chemical and electrical processes being responsible for its behaviour. It stands to reason that modelling those processes accurately should allow the same behaviour.

You could get really silly and claim that a model of a human brain which "seems" intelligent is actually just simulating intelligence and the model is just accurately predicting the behaviour of a human brain and outputting that behavioural prediction, but then you're just arguing semantics.

When it comes to intelligence, there is no difference between an accurate simulation of intelligence and actual intelligence. If the simulation does not behave in an intelligent manner, it is not accurate.

Feed sensory input into a physical brain and it outputs nerve impulses to muscles.

If you feed sensory input that represents someone punching me in the face into my brain, then my brain will output nerve impulses to muscles that cause my lips to move to say "ow, wtf?" and my arms and legs to make me move away (or hit back, whatever).

If you fed that same sensory input into an accurate model of my brain then it would output the same motor commands.

If you gave an IQ test to me and then gave another one to a model of my brain with the same internal state then it would get the same results.

The only way an accurate model of my physical brain would NOT yield the exact same behaviour would be if one of the following is true:

1 - It is not in fact an accurate model
2 - My behaviour, including my intelligence, is due to something non-physical and dualism is true

Materialists must reject the second, which leaves only the question of whether we can build an accurate model. I claim that it's possible at least in principle and there's good evidence that we will eventually achieve it in practice with hard work.

Dale Carrico said...

I would argue that a copy of a picture is still as useful for the same purposes. When we look at a beautiful work of art, we derive pleasure from appreciating the skill of the artist... assuming the copy is of high quality we can use it for the same purposes - appreciating the beauty.

Setting aside the obvious fact that collectors spend millions for originals while disdaining reproductions for reasons that are not entirely dismissable as snobbery: I have no objection to the fact that some people might want to believe they get the same value from a recent digitally animated Audrey Hepburn selling a candy bar as they do from her actual performance in Sabrina; I have no objection to some perv who wants to believe his blow-up fuck doll provides as rich a relationship as he is capable of enjoying with a human partner; I have no objection to somebody who wants to believe that they make some profound connection with the Great Emancipator via his stiff animatronic duplicate in Disney World's Hall of Presidents. Hey, there's no accounting for taste.

"If we say that the purpose of a brain is to yield intelligent behaviour (and secondly to control a body - but we generally value people for what's in their cerebral cortex, not their brain stem) then a copy of a brain that yields intelligent behaviour serves the purpose just fine, at least for other people."

I'm an atheist, so I don't believe the brain exists for a "purpose" in the way you seem to mean. This is not a quibble, because the theological framing here already figures intelligence as purposively designed in a way that smuggles your erroneous conclusions into your framing of your position in that very dispute. Your second framing of the brain as "controlling" the body is also considerably more problematic and prejudicial than you seem to realize. The brain IS the body, not a separate or superior supervisor of it. There is a whiff here of the very dualism you falsely attribute to opponents of your faith-based formulation of the "info-soul." I also think this business of introducing "control" into the picture so early is rather symptomatic, but we needn't go into that. I do hope you see a therapist on a regular basis.

Dale Carrico said...

"If you're a true materialist you accept that the brain is just a physical object with some complex chemical and electrical processes being responsible for its behaviour. It stands to reason that modelling those processes accurately should allow the same behaviour."

Not only does this assertion not "stand to reason" but it is a patent absurdity. I am a materialist in the matter of red wagons, but I hardly think a computer modeling a red wagon would be one, even if it might generate an image I would recognize as the representation of one. Not incidentally, I do not agree that we know at present that the material processes that give rise to the experience of thought are reducible to only those chemical and electrical processes we know in the way we know them. They might be, but our present accounts are not sufficient to pretend we know for sure. There is no need to invoke supernatural phenomena to recognize the highly provisional status of much of our present understanding of brain processes, to treat grandiloquent extrapolations from our present knowledge onto futurological imagineering predictions with extreme skepticism, and to treat their confident proponents as ridiculous.

"You could get really silly and claim that a model of a human brain which "seems" intelligent is actually just simulating intelligence and the model is just accurately predicting the behaviour of a human brain and outputting that behavioural prediction, but then you're just arguing semantics."

You are acting as though AI or simulated apparent persons were actual accomplishments, not futurological fancies, and that my skepticism about their realization, given the poverty of our understanding, is some kind of denial of facts in evidence. It is neither silly nor merely semantic for me to point out that AI is not in evidence, that AI champions are always certain it is around the corner even as it serially fails to arrive, that our understanding of intelligence is incomplete in ways that seem likely to bedevil the construction of actually intelligent/agentic artifacts, and that AI discourse and the subcultures of its enthusiasts have always been and remain indebted to pathological overconfidence, uninterrogated metaphors, troubling antipathies to materiality and biology, and sociopathic aspirations of mastery, control, and omniscience, none of which bode well for the project to which they are devoted.

jimf said...

> Neuromorphic computing is in fact one of the most promising
> technologies that could enable uploading, so bringing it up
> as some kind of argument against uploading strikes me as quite silly.

I only mentioned the term (and I probably shouldn't have -- it was
a distraction) because it happened to be the search term that dragged up
the Microsoft talk on YouTube that contained my primary illustration.

The thing I found amusing about that talk was that **analog electronics**
were being discussed (once again after more than half a century)
as a possible way around the limits of **digital**
computation that seem to be fast approaching. And that 15 years
ago among the heavy-breathing Extropian crowd, analog was
a dirty word -- fie upon "analog"; **digital** was where it was at.
In fact, the author of that Extropians' thread was attempting to convince
himself that "analog" **anything** is an illusion -- he apparently
wanted to believe the notion that existence itself is, at bottom, digital.
Which is, as a friend of mine once said, very much a "party question"
(by which he did not mean a question popularly discussed at parties;
he meant an **ideological** -- as in political party -- article
of faith as much as a scientific hypothesis ;-> ). And in fact the
Damien Sullivan quote gives the (unstated) reason for the strong desire to
believe in the digital basis of reality -- that rejecting it would seem to
put a damper on the anticipated party (in the other sense) at the end of time
predicated on the assumed continued exponential development
of digital computers extrapolated on the basis of recent
decades' success of Moore's "Law".

The whole thrust of our disagreement (or talking past each other,
or whatever it is) is **not** whether intelligence (or life itself)
has a material basis -- I believe it does as much as you seem to --
but whether the faith in **digital computers** to recreate
(let alone improve upon) these phenomena is justified. Let alone
whether it's going to happen "real soon now". I know that's
been an article of faith on both coasts (MIT and Silicon Valley)
since the days when Arthur C. Clarke and Marvin Minsky were in their
primes.

> So long as a system is in fact physical though, it is possible in
> principle at least for a computer to simulate it in some form.

But the "computer" that accomplishes that (if ever) might not
turn out to look much like what is now popularly meant by the
word "computer". And we don't really have much of a clue, right now,
what that thing might turn out to be, whether it will ever
be practically possible, or how long it will take to invent it.

> i've always been of the view that uploading as an approach to
> AGI is a "cheat" - . . . in practice it'd be better to implement AGI
> from the ground up rather than simulating brains.

Well, hope springs eternal. "We don' need no steenkin' brain" has
been the mantra of the artificial intelligentsia right from the
beginning. We just need some clever short-cut algorithms, and we can skip over
the biological messiness. Well, they (Minsky, Clarke, et al.)
thought it would certainly have happened by now, back when I was a kid.
And it was reading Gerald Edelman's _Bright Air, Brilliant Fire_
back in 1992 that sowed my own seeds of doubt about that project.
Before Edelman, I wouldn't have bothered to read Hubert Dreyfus.
Since then, I have.

Gareth Nelson said...

"Setting aside the obvious fact that collectors spend millions for originals while disdaining reproductions for reasons that are not entirely dismissable as snobbery, I have no objection to the fact that some people might want to believe they get the same value from a recent digitally animated Audrey Hepburn selling a candy bar as they do from her actual performance in Sabrina"

Of course there is value in physical originals on an emotional level; I myself commonly pay for Blu-ray/DVD copies of films I love even when I can download or stream them digitally, and I love going out to a gig even though I can listen to music at home. This is all beside the point of my analogy, though - my point was that a copy can serve a lot of the same purposes as an original. If I simply want to appreciate and enjoy a beautiful work of art I can do so with a copy, even if it'd be nice in a sentimental/emotional way to possess the physical original.

If I want to listen to a piece of music solely for the purpose of enjoying the sound of the music itself, a high-quality recording is just as good as having the musician perform it for me live. If I want to enjoy the experience of standing in a concert venue or if I want the excitement of a mosh pit or want to shake hands with the band members then I can go to the physical location, but to simply enjoy the sound of the music itself, I can use a sufficiently high quality copy.

"I'm an atheist so I don't believe the brain exists for a "purpose" in the way you seem to mean"

I didn't literally mean purpose, substitute "function" if you wish. Just as the function of the heart is to pump blood around the body, the function of the brain is to control the body.

"The brain IS the body, not a separate or superior supervisor of it. There is a whiff here of the very dualism you falsely attribute to opponents of your faith-based formulation of the "info-soul.""

It'd be more accurate to say the brain is PART of the body, and it most certainly controls the rest of the body. The reason my fingers are moving on my keyboard right now is because my brain is sending the muscles in my hands signals and causing them to move in response to my thoughts and my thoughts are caused by (or more correctly simply ARE) patterns of activity in my brain.

"I also think this business of introducing "control" into the picture so early is rather symptomatic, but we needn't go into that. I do hope you see a therapist on a regular basis."

Are you trying to imply that merely using the word "control" to describe the brain is indicative of psychological problems? That is rather odd to say the least, and a little insulting.

"I am a materialist in the matter of red wagons, but I hardly think a computer modeling a red wagon would be one, even if it might generate an image I would recognize as the representation of one"

The essential function (or purpose) of a red wagon is to transport you from A to B, not to look like a red wagon. That is what we are most interested in.

Gareth Nelson said...

If we could build a model of a red wagon that actually somehow physically transported us then it would basically be just as useful as a real red wagon. Of course this is rather nonsensical as a mathematical model alone could not yield the behaviour we desire in this situation.

On the other hand, with brains we are interested in intelligent behaviour, and a mathematical model alone could in fact yield intelligent behaviour if it is accurate enough.

"I do not agree that we know at present that the material processes that give rise to the experience of thought are reducible to only those chemical and electrical processes we know in the way we know them. They certainly might, but our present accounts are not sufficient to pretend we know for sure"

I agree it may turn out there's something we don't yet know about the brain and might never discover; if that's the case then of course uploading will forever remain a dream. But modern neuroscience keeps discovering more and more about how brains work, and to date it seems clear they operate on a combination of electrical impulses and responses to chemicals in the environment. Psychoactive drugs seem to be decent evidence for this - most of them operate by binding to neurotransmitter receptors in various ways. We've even designed drugs for particular purposes by exploiting this knowledge.

"You are acting as though AI or simulated apparent persons are actual accomplishment, not futurological fancies"

What I was saying is that it'd be semantics to argue that a simulation of a brain is not actually intelligent, even if it does all the same stuff that an actual physical brain does.

Every recording in my music collection is a copy of the original master held by the record label (or in some cases by the indie artist). I still get the exact same practical benefits from listening to that copy as I would if I listened to the master.


I remember an old friend of mine who died. He was an insanely talented coder specialising in 3D rendering and could do amazing things. For a while I was working with him on a project that I had to cancel for obvious reasons. He also had a wonderful sense of humour. If I could have my friend back somehow, I'd get back that wonderful sense of humour and that amazing talent. If I had a copy of his brain and it was hooked up correctly, I'd still get that sense of humour and that talent, so long as the copy was of sufficient quality.

Yes, it wouldn't "really" be having my friend back, but it'd serve similar purposes in at least some ways.

jimf - I believe you've mistaken my claim for a different one - I have never claimed that traditional von Neumann digital machines will be sufficient for realtime simulation of human brains. Although theoretically, given enough memory, they can in fact compute any computable problem - and I believe simulations of brains are computable - the question boils down to whether it's practical to do in realtime, and if you take that into consideration it's likely we'll need a different form of computing. You're quite right that it'll look very different from what we have now if we do pull it off, and it might take a long time - but it is at least in principle possible.

As for your last paragraph, I've always said that a good AGI design should use a hybrid approach, using biologically-inspired neural nets and more traditional linear software algorithms, each where appropriate. Like uploading, a "from the ground up" design is a large and complex project, but it does not strike me as actually impossible.

Jean Diogo said...

"You may notice that I use the terms "upload", "brain model" and "simulation" interchangeably - that's because they essentially all mean the same thing. If you run a mathematical model of a brain and it does not yield intelligence then your model is wrong and needs to be refined."

You are delusional, Gareth. There is not a single example anywhere in the world of a mathematical model becoming the thing it models. The most complete mathematical model of a natural phenomenon is nothing more than a mathematical model. A differential equation describing the movement of planets is not a planet moving. An algorithmic simulation of a growing city is not a city. This is a thing so obvious that it makes me wonder how it can be that serious scientists so often let themselves be driven by their childish dreams of omnipotence! And how can it be that those supposedly skeptical scientists convince themselves that a computer simulation of a theoretical model of the brain will become conscious just because "ultra super duper powerful computer", when you cannot even define "conscious". So much eye-rolling for you techno-fanatics.

Dale Carrico said...

What I was saying is that it'd be semantics to argue that a simulation of a brain is not actually intelligent, even if it does all the same stuff that an actual physical brain does.

But none of that exists in the world, Gareth, the world in which this argument is happening. You can chide my lack of faith from Holodeck Heaven when you and the rest of the robocultists can actually cash the checks your asses keep writing. I'm not holding my breath -- if only because it's so hard to hold one's breath and laugh at the same time.

Gareth Nelson said...

Jean - if a model of a brain does not behave in an intelligent manner it is not in fact an accurate model.

Please explain to me how an accurate model of a brain could NOT be intelligent. I don't care about consciousness or qualia - I'm just talking about intelligence.

If you built a model of planets moving, but when you observed the output you noticed your virtual planets did not actually orbit the sun, then I'd say your model is flawed.

If you build a model of a brain, and when you hook it up to a virtual body or a robot it does not actually behave in just as intelligent a manner as a real brain, then I'd say your model is flawed.

The concept of a simulation of intelligence that isn't actually intelligent makes no sense at all since an accurate simulation of intelligence will respond in an identical manner to a "real" intelligence.

Let's try a thought experiment: I assume you grant that I am in fact intelligent, just like any other human being. You may disagree with my opinions but I would hope you grant they do in fact come from cognitive processes and intelligence.

Since you are only reading my words in the form of blog comments, if we already had the technology to model my brain and used such a model to write these comments how would you be able to tell that it was a model and not a real human?

I claim you'd not be able to tell, because the end result would be the same.

Let's take this further. Imagine we already have this technology and we further have a means to interface it with a human nervous system. While a friend or relative of yours (someone you know closely) is sleeping, their brain is copied perfectly into a computer model and their biological brain is swapped for a computer inside their skull. Without looking inside their skull, what test of intelligence can you devise that they would fail and that the original brain would not fail?

Any test of intelligence that a biological brain passes, an accurate model of a brain would also pass.

Dale - I don't think I've ever claimed (nor has anyone else with any understanding) that we already have such simulations; my only claim is that they are in fact possible, or at least possible in principle.

jimf said...

> You can chide my lack of faith from Holodeck Heaven when you and
> the rest of the robocultists can actually cash the checks your asses
> keep writing.

And it is of course the **faith** (in the full, religious sense of
the word) aspects of this discourse that make it so resistant to
the skepticism that a scientific attitude normally encourages.

And some folks have capitalized on this faith to become Gurus of
the Singularity, bringing along disquieting echoes of L. Ron Hubbard
and earlier figures who have taken advantage of the popular science
of their times to whomp up new religions.

As a recent article on an ex-Mormon Web site puts it:

http://zelphontheshelf.com/prestigious-prophet-or-common-cultist/
----------------------
Over the years, I’ve done a fair amount of research about various
cult leaders. As I have read articles and watched documentaries on
people like Jim Jones, L. Ron Hubbard, and David Koresh (not to mention
offshoot Mormon leaders like James Strang, Jim Harmston, and
Warren Jeffs) I have been impressed by the number of commonalities
in all of their stories. It’s really almost as if they are all
playing their own variations of the same tune. . .

. . .

Conclusion

As I mentioned in the beginning, these are actual characteristics
prevalent among cult leaders. Does Joseph fit the bill? All I can say
is something my seminary teacher once told me: “If it walks like a duck,
quacks like a duck, and looks like a duck, it’s usually a duck.”
======

Then, of course, there's the politics of all this. . . ;->

Dale Carrico said...

I assume you grant that I am in fact intelligent

You are making this more and more difficult. Although robocultists regularly use the word "upload" to describe "info-soul" (a notion both muddy and incoherent) migration (a metaphor pretending to denote engineering) from bodies into code, you insist you mean by it something more modest -- which I'm not sure we should care about since nobody I criticize shows the same restraint and you yourself seem inclined to slip into the usual discourse yourself all your protests to the contrary notwithstanding. I don't think you are using even words like "model" and "accurate" in the usual ways. This is what comes of spending too much time in a marginal sub(cult)ure, I'm afraid.

I don't think i've ever claimed (nor has anyone else with any understanding) that we already have such simulations, my only claim is that they are in fact possible, or at least possible in principle.

The claim is implicit and performative, and may also be unconscious. You *regularly* assume the smug air of a resident of tech-heaven all of whose futurological faith has been rewarded by his sooper-friends and Robot God -- and skeptics are revealed to have been ignoramuses and cowards and fools who failed to see the power of your vision. You declare my criticisms "semantic" and "ideological" as compared with your "technical" and "scientific" discourse -- even as I insist on modesty and qualified claims compatible with the actual state of our knowledge, and on skepticism about grand predictions coming from people who have never been anything but wrong and who exhibit rashness, megalomania, marginality from scientific consensus, credulity bound up with palpably irrational fears of death and contingency, as well as irrationally exuberant wish-fulfillment fantasies of immortality, omnicompetence, and wealth beyond the dreams of avarice. I'm turning the channel on your infomercial; I'm not answering the door to the evangelist with the pamphlet.

Jean Diogo said...

Gareth,

You "don't care about consciousness", but the goTo's you perform from "simulation" to "mind uploading" and "swapping brains" are a total non sequitur. Show any example in history of a 100% effective simulation or a complete model of anything that exists.

As an AI researcher, I believe in the democratization of scientific progress as the highway to improving the human condition. And I have no problem at all with the Turing test as a parameter to judge the effectiveness of a computer system. That doesn't mean I endorse the religious belief that a mind could exist outside of a body.

Replacing a person's brain with a machine or "replicating" one's brain via software has nothing to do with extending one's life. It has to do with killing. And believe me on that: a group of scientists blinded by ideological delusions would kill people on behalf of their faith.

It doesn't matter how much your computer simulation behaves like water: you will never be able to drink it as if it were water. And algorithmic food will never provide the nutrients your body needs. It doesn't matter how capable a computer system is: it is still not a biological thing, even less a person.

(Sorry about the speech barrier and the rudeness. I just like to argue even in a foreign language. Salute.)

Gareth Nelson said...

"nobody I criticize shows the same restraint and you yourself seem inclined to slip into the usual discourse yourself all your protests to the contrary notwithstanding"

The first part of this article makes a decent argument that uploading is not the same as transfer of consciousness (the author later tries to claim that "gradual uploads" are different - something I'm not convinced of myself):
http://hplusmagazine.com/2013/06/17/clearing-up-misconceptions-about-mind-uploading/

"You "don't care about consciousness", but the goTo's you perform from "simulation" to "mind uploading" and "swapping brains" are a total non sequitur. Show any example in history of a 100% effective simulation or a complete model of anything that exists."

To be clear, when I talk about consciousness I'm talking about subjective experience and qualia - these are things that ultimately I can't even prove other humans have. You can't prove that other humans have them either, not 100%. This is the hard problem of consciousness.

Of course we need not care about it, because what we're talking about is intelligence and they are not the same thing.

"That doesn't mean I endorse the religious belief that a mind could exist outside of a body."

It strikes me as more of a religious belief that a mind can only exist in the form of a biological brain, as this implies something magical about brains.

"Replacing a person's brain by a machine or "replicating" one's brain via software has nothing to do with extending one's life. It has to do with killing."

Of course replacing someone's brain with a machine would kill them - but that's beside the point I was making. My point was that you would not be able to tell, assuming the machine accurately copies the person's behaviour in every way.

"Doesn't matter how much your computer simulation behave like water, you will never be able to drink it as it was water. And algorithmic food would never provide the nutrients your body needs"

This analogy is such a bizarre idea to me. The purpose of food is to provide nutrition and for that you need physical matter. The purpose of intelligence is to respond in an intelligent manner, and for that you need intelligent behaviour.

I sometimes play old computer games in emulators. I no longer own an original playstation or a super nintendo, but I am still able to replicate the purpose of that original hardware via simulation - playing games. It does not matter that it's "only" a simulation, because the simulation still performs the same purpose as the original. It doesn't even matter if the simulation is not 100% perfect internally and a few tricks are used to speed things up, so long as the end result is the same.

A simulation of a brain would still perform the same function as the original brain for the purposes of yielding intelligent behaviour. Even if internally it wasn't actually a pile of biological neurons but was instead a pile of silicon chips, if the same end result occurred then it would not matter.

Gareth Nelson said...

Come to think of it, a pacemaker is a brilliant analogy: a pacemaker is based on a mathematical model of the electrical pulses that keep the heart beating. Modern pacemakers even use software to determine the best frequency and other parameters to try and replicate the malfunctioning natural pacing.

It's "just a model", but it still actually works.

Hippocampal prosthesis chips also come to mind.

Dale Carrico said...

this article makes a decent argument

You don't address objections you just recommend pieces of religious literature. Just so you know, I read Moravec's "Pigs in Cyberspace" when Extropy was a paper zine. I have read more variations on the uploading thesis than you seem capable of grasping. I am not ignorant of your pet formulations -- I reject them as incoherent, pseudo-scientific, and pathological for reasons I have stated and to which you do not respond.

when I talk about consciousness i'm talking about subjective experience and qualia - these are things that ultimately I can't even prove other humans have. You can't prove that other humans have it either, not 100%. This is the hard problem of consciousness.

Maybe if you are a sociopath. Your communication is a performative denial of the substance of your own claim, the pragmatics of proof also presupposes what you claim is unprovable. To be clear, this is not the opening gambit for a deep philosophical exchange but a testament to my strong suspicion that you are not up to one.

a religious belief that a mind can only exist in the form of a biological brain, as this implies something magical about brains

One can easily concede the possibility in principle that phenomena legible as "minds" might be instantiated on non-biological structures while at once taking seriously that all consciousness properly so called has always been biological, that our understanding of consciousness and intelligence as phenomena is conspicuously incomplete, and that believers in the program of building artificial intelligence as a cohort often exhibit overconfidence incompatible with their history of failure, rely on reductive understandings of mind that have gotten them nowhere for good reason, and regularly exhibit pathological hostility to the biological incarnation of mind and sociopathic hostility to the social performance of intelligence. You can dismiss those who don't subscribe to the faith-based initiative of good old-fashioned artificial intelligence and its digital-utopian, cybernetic-totalist, singularitarian, and techno-immortalist variations as religionists if you like, if that helps you sleep at night, but it isn't exactly hard to discern the religiosity of GOFAI ideology.

you would not be able to tell assuming the machine accurately copies the person's behaviour in every way

Consciousness and intelligence have subjective, objective, and inter-subjective dimensions in which they are substantiated -- when you claim your ideal machine "copies the person's behavior in every way" you are presuming the machine is physically indistinguishable from the person, and would be so for a physician or a lover? You are proposing that the narrative continuity of this person would be subjectively and objectively coherent -- for instance, that nobody would have witnessed the death and replacement of the person by a machine? Quite apart from the fact that none of this is remotely accomplishable, and so there is no reason to regard any of it as relevant to public policy or investment or as anything but a distraction from actually urgent questions and problems (some of them related to computation and networks), to be honest your thought experiment seems to rely on a premise of indistinguishability which either disregards as irrelevant differences that actually make a difference to anybody who isn't a sociopath or sets such a high bar for indistinguishability that it isn't clear why it wouldn't be pathological to claim the person in question had been "replaced" by a "machine" in the first place.

jimf said...

> It strikes me as more of a religious belief that a mind can only exist
> in the form of a biological brain, as this implies something magical
> about brains.

It strikes me as more of a scientifically sober recognition of the
gulf that currently exists between the complexity of any man-made
artifacts and the complexity of biological systems. You don't
have to think there's anything "magical" about brains to acknowledge
that they're: 1) extremely complex and 2) not all that well
understood, at present.

On the other hand, it **does** strike me as more of a religious belief,
and one which smacks of magical thinking, to hand-wave about
the "exponential acceleration" of computer technology, or technology
in general, and get all breathless about the imminent arrival
of uploads and cyber-immortality, or start worrying about ex machina
terminators (whether in the form of Arnold Schwarzenegger or
Alicia Vikander ;-> ).

> I sometimes play old computer games in emulators. I no longer own an
> original playstation or a super nintendo, but I am still able to replicate
> the purpose of that original hardware via simulation - playing games.
> It does not matter that it's "only" a simulation, because the simulation
> still performs the same purpose as the original

Yes, as I mentioned before, using a current computer to simulate
an old one is the one salient example of a "computer model" of
something being able to successfully **substitute** for the
thing being "modelled". However, in this case, the thing
being simulated and the thing doing the simulation are both
similar sorts of things, they're both human-designed artifacts,
and they're both more-or-less well understood. Though even so,
it can be quite an adventure to reverse-engineer poorly-documented
details of an old computer system by inference from the
surviving examples of an operating system. Multics is just now
being resurrected, you know. ;-> . Also, though they're
similar sorts of things, the current machine running the simulator
is always vastly more capable than the older machine being simulated,
thanks indeed to Moore's Law. It might have been possible **in principle** to simulate
a Multics machine on a 1973-vintage Intel 8008, but no such
effort would have been likely to succeed.
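The kind of substitution described above needs only the old machine's abstract definition, not its circuitry. A minimal sketch of an instruction-level interpreter, using an invented three-instruction accumulator machine (not any real architecture):

```python
# Toy instruction-level simulator: a hypothetical 3-instruction
# accumulator machine. An emulator at this level needs only the
# abstract instruction set, not the original hardware's logic circuits.
def run(program, acc=0):
    """Interpret a list of (opcode, operand) pairs; return the accumulator."""
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "LOAD":           # load a constant into the accumulator
            acc = arg
        elif op == "ADD":          # add a constant to the accumulator
            acc += arg
        elif op == "JNZ":          # jump to instruction `arg` if acc != 0
            if acc != 0:
                pc = arg
                continue
        pc += 1
    return acc

# Example: count down from 3 by adding -1 until the accumulator hits zero.
prog = [("LOAD", 3), ("ADD", -1), ("JNZ", 1)]
print(run(prog))  # prints 0
```

Nothing about the loop depends on how the original hardware implemented LOAD, ADD, or JNZ; the instruction-set-level abstract machine is all the emulator sees.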

Dale Carrico said...

A pacemaker is not a model, it is a device. A model of a pacemaker in a pacemaker-designer's CAD-CAM rig is a model. Note that a computer model of a pacemaker isn't a pacemaker, even a digital pacemaker simulating its behavior in a simulated human body isn't a pacemaker, and it can't do what a pacemaker does.

Gareth Nelson said...

"You don't address objections you just recommend pieces of religious literature. Just so you know, I read Moravec's "Pigs in Cyberspace" when Extropy was a paper zine."

I was responding to your claim that other transhumanists don't state the same thing: the idea that uploading is possible, but is not a form of survival.

"Maybe if you are a sociopath. Your communication is a performative denial of the substance of your own claim, the pragmatics of proof also presupposes what you claim is unprovable"

To be clear: the hard problem of consciousness has not yet been solved by anyone. There is no 100% proof that other people are in fact capable of qualia and subjective awareness, and no such proof can ever be had; stating this fact does not make one a sociopath.

It is related to "the problem of other minds", and you do not need to be a sociopath to recognize that it is in fact a problem. It's even quite likely (just not 100% certain) that other human brains have subjective awareness and you should behave as though they do.

"One can easily concede the possibility in principle that phenomena legible as "minds" might be instantiated on non-biological structures while at once taking seriously that all consciousness properly so called has always been biological"

I'm not one to claim that we'll get human-level AI any time soon, or that it's easy - nor do I claim it already exists in any form (except perhaps certain projects which might eventually lead to it, if that counts).

As for the thought experiment I proposed, I am stating that if the machine copied behaviour in every way it would be impossible to tell the difference from the outside. A lover would not be able to tell (unless your idea of sex is performing MRI scans, of course); a doctor MIGHT be able to tell if they perform the right tests, but that is not relevant to the idea under discussion.

If this could be done, then yes the original person would be dead, but would others around them notice? The person's own experience would indeed end, but assuming the process is not witnessed, how would others tell?

Let's boil it down to this question:
What test of intelligence can you devise that a real biological brain would pass which an accurate simulation of that same brain would fail?

This is why the "virtual food can't feed you" analogy is nonsense - the test of whether food is nutritious is to eat it and see if you survive without health problems, and of course simulated food can't be eaten. A simulated brain, on the other hand, can in fact perform cognitive tasks.

Gareth Nelson said...

"A pacemaker is not a model, it is a device. A model of a pacemaker in a pacemaker-designer's CAD-CAM rig is a model. Note that a computer model of a pacemaker isn't a pacemaker, even a digital pacemaker simulating its behavior in a simulated human body isn't a pacemaker, and it can't do what a pacemaker does."

A modern pacemaker has a microcontroller running software that models what the sinoatrial node does, in order to replace functionality that has malfunctioned in the biological system.

All software is mathematics; the fact that you can use the software to trigger physical events does not mean that somehow it's no longer a mathematical model.

jimf - The thing is, we're not talking about the equivalent of modelling a Multics machine on an 8008, we're talking about the equivalent of running a PDP-11 emulator on a modern i7 (or indeed, my example of a SNES or PS1 emulator). If something is possible to simulate in principle, then it becomes a question of "can we do this in practice?" and then "can we do this in realtime in practice?"

jimf said...

> for instance, nobody would have witnessed the death and
> replacement of the person by a machine?

In the original movie of _The Stepford Wives_ (the one with
Katharine Ross and Paula Prentiss; I haven't seen the
newer one with Nicole Kidman and Bette Midler) only the
wife being replaced gets to witness the moments leading
up to her own murder by her robot replacement; the wicked
husbands presumably get to indulge the fantasy that they're
still living with their original wives, only, you know,
"improved". ;->

Dale Carrico said...

What actually interesting or relevant concern is supposed to be illuminated by our meditation on this imaginary artifact you have described so exactingly, but that you don't claim exists or is likely to exist any time soon?

Gareth Nelson said...

This is what I hope to demonstrate:

The claim that a copy of a brain running inside a simulation/model (same thing) would not be intelligent is quite obviously false.

The analogy to "you can't eat virtual food" or "a model of rainfall won't get you wet" is irrelevant, because the purposes are different.

No test of intelligence could be devised that a biological brain would pass which a simulation of that same brain would fail.

Dale Carrico said...

A pacemaker is not reducible to software, and software is not mathematical spirit stuff but is instantiated on a material carrier.

Dale Carrico said...

I have often contemplated the Stepford analogy to our upload enthusiasts, wistfully so given the lack even of camp value among the usual techno-immortalist suspects. Has La Vita More provided any unconscious camp transhumorist stylings on the topic I wonder?

Gareth Nelson said...

The pacemaker models the signals that pace the heart. The heart muscles don't care whether the signals come from the sinoatrial node or from the pacemaker. The fact that the frequency and other parameters came from a microcontroller rather than a bunch of neurons is irrelevant; all that matters is the final output - a pulse of electrical current that the heart muscles respond to.

Software itself is indeed mathematics, and mathematics is in fact not a concrete physical thing. If I write a piece of code, it's stored as a binary representation on my hard drive, but the algorithms themselves are abstract things. A bit irrelevant to our current discussion, of course.

The point is, mathematical models that replace biological functions already exist in a simple form and are widely used.

Dale Carrico said...

The work done by the word "model" for you is by now deliriously ramifying. These sorts of free-associational slippages are commonplace in faith-based systems eager to sanewash their zanier articles of faith. Your reassuring declarations that none of this AI or uploading business is remotely real or even in the offing are hard to square with the light in your True Believing eyes as you announce your plans and declare your march of history underway and wave away all real-world irrelevancies as obvious nonsense. Presumably, you have been sufficiently bolstered in your resolve by this joyless ritual swapping of pieties that you can set aside this incessant pestering of me (as though you were actually open to criticisms) and get on with the Very Serious and Very Hard work of demonstrating your luminous vision by actually building your Robot God or whatever you think you are doing. You aren't going to convince me to join your silly robot cult, and if you are trying to convince yourself the better route would be through therapy.

jimf said...

> The work done by the word "model" for you is by now deliriously ramifying. . .
> You aren't going to convince me to join your silly robot cult and if you are
> trying to convince yourself the better route would be through therapy.

To be fair to Mr. Nelson, this particular True Believing light
is now to be found in the eyes of folks -- not just computer programmers,
apparently -- you might think would know better. Henry Markram is
one obvious example -- he is a genuine neuroscientist according
to his Web CV, and presumably knows all about the structural and
biochemical complexity of the subject of his research.

I was browsing in the Harvard Book Store in Cambridge MA yesterday,
and thumbed through a copy of _The Brain: The Story of You_ by
David Eagleman. He's the telegenic host of a PBS TV series of
the same name, and I gather that the book is a companion volume to
the TV show. I was amused to see that in the last chapter
he tips his hat to the >Hist agenda:
http://www.pbs.org/the-brain-with-david-eagleman/episodes/who-will-we-be/

And, of course, none other than the late Gerald Edelman, at the
Neurosciences Institute at La Jolla, presided over the construction of
a series of "brain-based" or "noetic" devices intended to model various
aspects of his own theory of neural Darwinism:
http://www.nsi.edu/~nomad/

Markram, I gather, has been accused of taking PR advantage of equivocal
interpretations of the goals of the Human Brain Project -- he might,
when pressed, have admitted that his computer programs are limited-scope
tools of research, but much of the reportage surrounding the effort
seems to have been fueled by rather more grandiose expectations.
That equivocation may have backfired on Dr. Markram at the end.

I think computer simulations of neurons, neural networks, automobile
engines, nuclear physics, climate change, or colliding galaxies
are all just fine, if they turn out to be useful in some way.
However, jockeying for resources by appealing to wishful thinking
or over-promising results is rather less benign (I'm free to say,
having never been in the position of having
to compete for grant money), and recruiting the rubes into "scientific",
or science-fictional, religious belief fully deserves to be
exposed as such.

Jean Diogo said...

Gareth,

"I sometimes play old computer games in emulators. I no longer own an original playstation or a super nintendo, but I am still able to replicate the purpose of that original hardware via simulation - playing games. It does not matter that it's "only" a simulation, because the simulation still performs the same purpose as the original. It doesn't even matter if the simulation is not 100% perfect internally and a few tricks are used to speed things up, so long as the end result is the same."

You lose yourself in a bad metaphor when you start comparing brain and mind with emulation, as if the very concept of emulation were applicable to anything beyond a computer system.

"Mathematical models that replace biological functions already exist in a simple form and are widely used."

'Replacing' is not the proper word. Concrete artifacts are not math models. Anyway, even if it weren't complete nonsense, mind uploading has nothing to do with extending one's life. Improving our healthcare system does.

Dale,

"One can easily concede the possibility in principle that phenomena legible as "minds" might be instantiated on non-biological structures while at once taking seriously that all consciousness properly so called has always been biological, that our understanding of consciousness and intelligence as phenomena is conspicuously incomplete, and that believers in the program of building artificial intelligence as a cohort often exhibit overconfidence incompatible with their history of failure, rely on reductive understandings of mind that have gotten them nowhere for good reason, regularly exhibit pathological hostility to the biological incarnation of mind and sociopathic hostility to the social performance of intelligence."

Just quoting this for my own fan service. You are fantastic!

Dale Carrico said...

Jean -- thank you very much.

jimf said...

> The thing is, we're not talking about the equivalent
> of modelling a Multics machine on an 8008, we're talking
> about the equivalent of running a PDP-11 on a modern i7
> (or indeed, my example of a SNES or PS1 emulator).
> If something is possible to simulate in principle then it
> becomes a question of "can we do this in practice?" and
> then "can we do this in realtime in practice?"

Another place where this analogy breaks down has to do with
the fact that a PDP-11, let's say, is -- for the purposes of
simulation -- a well-defined abstract machine. To run an
operating system on it (like RSX-11 or RSTS) all you need,
basically, is the definition of the registers and instruction
set operations out of the processor handbook, and whatever aspects
of the peripheral devices that an OS "sees". You don't
have to worry about a particular model PDP-11's microcode implementation,
let alone the logic circuits out of which it was built.
The abstract description of the instruction-set-level
virtual machine you're constructing in software is cleanly separable
from whatever physical instantiation the older machine once had.

This abstraction level can even be shifted depending on your requirements --
e.g., the SimH simulator for an IBM 1620 or 1401 will run
programs for those machines, but it can't be used to drive
a realistic simulation of the console blinking lights -- you'd
need a "cycle level" (rather than just an "instruction level")
simulator to do that. Similarly the Hercules IBM mainframe
simulator will provide instruction-level simulation for any
IBM System/360 right up through a modern System z, but nevertheless
one retrocomputing enthusiast was motivated to implement
the microcode of a specific model -- the 360/30 -- in VHDL
(for a Xilinx FPGA) just so he could get his simulator's console display
to mimic the original machine's hardware console.
(I once tried to find out if I could get a Xilinx software simulator
so I could run that thing without having to mess with a
physical FPGA demonstration board, but I didn't find one. ;-> ).

But the point is, in all of these cases the **abstract machine**,
whether intended to run programs or to run programs and blink
the lights too, is cleanly separable from the software and
hardware layers "beneath" it.

This is not (or at any rate, certainly not yet) true of biological
brains. If we had to build computer simulators, not just at
the instruction-set level, but by simulating the behavior of a
particular piece of hardware all the way down to
charge-carriers propagating through conductors or semiconductors,
then the job would be on an altogether different scale.
We can't really say, at this point, whether brain simulation
via digital computer is possible even **in principle**. And we
don't have a clue what other technologies could tackle the job.
Of course, hard-core >Hists would be likely to start hand-waving at this
point about the coming wonders of Nanotechnology. (Or in Hugo de Garis'
case, femtotechnology. ;-> ).

Gareth Nelson said...

I'll respond properly later, but as a geek I must ask: do you have info on that microcode implementation?

I love retro computing.

Dale Carrico said...

All this endless talking, talking, talking is keeping you from coding your Robot God you know. Get to work or you'll never prove us deathist luddites wrong.

jimf said...

> as a geek I must ask: do you have info on that microcode implementation?

Yes, the guy's name is Lawrence Wilkinson, and all the info is
here:
http://www.ljw.me.uk/ibm360/vhdl/
There are links there to everything you need to recreate
his IBM 2030 CPU implementation.

There's also a link to his 2011 OSHUG presentation:
https://skillsmatter.com/skillscasts/2115-computer-conservation-with-fpgas

And he has a YouTube channel (as "ibm2030") with a demo
of his simulator:
https://www.youtube.com/watch?v=walWU2MQ2OM

There's another demo video at:
IBM SYSTEM/360 Recreation at Silicon Dreams 2013
https://www.youtube.com/watch?v=HffTSo9zpYI

And it turns out (something I didn't know until just
now) that there's **another** microcode-level 360
simulation, this time for a 360/65, by one Camiel Vanderhoeven,
who uses his to operate a real 360/65 front panel
(not just a video representation of one):
IBM 360 emulator counting up
https://www.youtube.com/watch?v=fv6WK5QiG1Q

Cool stuff.

jimf said...

Apropos both the IBM 360 and AI (as in Artificial Intelligence ;-> ),
there's a terrific SF novel I first read many years ago called
_The Adolescence of P-1_ by one Thomas J. Ryan:
http://www.amazon.com/Adolescence-P-1-T-Ryan/dp/0020248806/
https://en.wikipedia.org/wiki/The_Adolescence_of_P-1

Set in the early 70's, the novel describes the accidental
unleashing of artificial intelligence by an idiot-savant
college kid. In the 360-370 mainframe era no less, which
adds a layer of camp by today's standards. It's a real
hoot, but it's also quite decent SF. There are tense
scenes in computer rooms, and lots of IBM lingo like
"IPL" and "Sysres". ;->

Highly recommended.

Dale Carrico said...

You are totally flirting, aren't you.

jimf said...

> You are totally flirting, aren't you.

Pwned! :-0

Gareth Nelson said...

I have to admit, IBM 360 consoles are downright sexy.

Back on topic....

There do exist transistor-level simulations of CPU designs, which is the kind of equivalent to neuron-level brain simulations. My point with the emulator analogy is that ultimately the internal implementation does not matter, only the end results.
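For concreteness, the simplified unit typically used at the "neuron level" is something like a leaky integrate-and-fire model. A toy sketch of a single such neuron (parameter values here are illustrative, not biologically calibrated):

```python
# Toy "neuron-level" simulation: one leaky integrate-and-fire neuron,
# the standard drastically-simplified unit in many brain-simulation
# projects. This is a cartoon of a single neuron, not a brain model.
def lif_spikes(inputs, leak=0.9, threshold=1.0):
    """Integrate a sequence of input currents; return the spike times."""
    v = 0.0                      # membrane potential, arbitrary units
    spikes = []
    for t, current in enumerate(inputs):
        v = v * leak + current   # decay toward rest, then add the input
        if v >= threshold:       # fire and reset once threshold is crossed
            spikes.append(t)
            v = 0.0
    return spikes

# Constant drive of 0.4 per step: the membrane charges, fires, resets.
print(lif_spikes([0.4] * 10))  # prints [2, 5, 8]
```

Whether networks of such simplified units capture anything that matters about actual neurons is, of course, exactly the point under dispute in this thread.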

Dale Carrico said...

kind of equivalent to neuron-level brain simulations

Science!

the internal implementation does not matter

Some materialist you turned out to be.

Pray however you want, but keep it in church.

jimf said...

> There do exist transistor-level simulations of CPU designs. . .

Well, in the case of brains, we don't even know what the
"transistors" might turn out to be.

> which is. . . kind of equivalent to neuron-level brain
> simulations.

Presumably you're not suggesting a simple one-neuron-to-
one-transistor mapping here. That does remind me, however,
of a harrumph that came from somebody on the Extropians'
list back in the day (back in my day ca. 15 years ago, that is) who
thought it likely that a single-transistor-to-single-neuron
mapping ought indeed to be quite sufficient, thank you
very much, to corral a brain's functionality.

But back on the "just gimme the mind, never mind the
brain simulation" front: 40-50 years ago, some psychologists
escaping from the straitjacket of behaviorism looked to
computer science (encouraged, no doubt, by the hopes of the
practitioners of what came to be known as Good Old-Fashioned AI)
to find an "information-processing" basis for the human mind
(leaving the messy and difficult brain to the biologists). Hence
the "cognitive psychology" that Gerald Edelman was reacting
against a decade or two later.

Edelman (and others who shared his views, such as
George Lakoff) was unpersuaded that traditional top-down
AI will ever be able to produce general-purpose machines able
to deal intelligently with the messiness and unpredictability of the
world, while at the same time avoiding a correspondingly complex
(and expensive) messiness in their own innards. Edelman cites
three maxims that summarize his position in this regard:

1. "Being comes first, describing second... [N]ot only is
it impossible to generate being by mere describing, but,
in the proper order of things, being precedes
describing both ontologically and chronologically"

2. "Doing... precedes understanding... [A]nimals can solve
problems that they certainly do not understand logically... [W]e
[humans] choose the right strategy before we understand why...
[W]e use a [grammatical] rule before we understand what it is;
and, finally... we learn how to speak before we know anything
about syntax"

3. "Selectionism precedes logic." "Logic is... a
human activity of great power and subtlety... [but] [l]ogic is
not necessary for the emergence of animal bodies and brains, as
it obviously is to the construction and operation of a
computer... [S]electionist principles apply to brains
and... logical ones are learned later by individuals with brains"

-- Edelman and Giulio Tononi, _A Universe of Consciousness_,
pp. 15-16.

"It is selection -- natural and somatic
-- that gave rise to language and to metaphor, and it is
selection, not logic, that underlies pattern recognition and
thinking in metaphorical terms. Thought is thus ultimately based
on our bodily interactions and structure, and its powers are
therefore limited in some degree. Our capacity for pattern
recognition may nevertheless exceed the power to prove
propositions by logical means... This realization does not, of
course, imply that selection can take the place of logic, nor
does it deny the enormous power of logical operations. In the
realm of either organisms or of the synthetic artifacts that we
may someday build, we conjecture that there are only two
fundamental kinds -- Turing machines and selectional systems.
Inasmuch as the latter preceded the emergence of the former in
evolution, we conclude that selection is biologically the more
fundamental process." (_UoC_ p. 214).

Ever-hopeful transhumanists will retort at this point that,
after all, it wasn't necessary to make a detailed simulation
of a bird in order to get to the passenger jet. Hope springs
eternal. Stay tuned.

Dale Carrico said...

"The laughed at the Wright Brothers too!"

"For heaven's sake take your thumb out of your mouth."

Gareth Nelson said...

"the internal implementation does not matter

Some materialist you turned out to be."

Bit of a non sequitur that. I say the internal implementation does not matter so long as the external behaviour still yields intelligence, in what way does that contradict materialism? If anything, claiming that it matters whether there's neurons or silicon chips implementing intelligent behaviour is claiming there's something important about neurons that goes beyond their material behaviour.

"Presumably you're not suggesting a simple one-neuron-to-
one-transistor mapping here. That does remind me, however,
of a harrumph that came from somebody on the Extropians'
list back in the day (back in my day ca. 15 years ago, that is) who
thought it likely that a single-transistor-to-single-neuron
mapping ought indeed to be quite sufficient, thank you
very much, to corral a brain's functionality."

That would indeed be very silly. What I was saying is that neurons are the primitive units of brains in a similar way to how transistors are the primitive units of digital computers.

"Hence the "cognitive psychology" that Gerald Edelman was reacting against a decade or two later."

Please don't tell me you think cognitive psychology is somehow invalid. It would explain a lot though.....

The human mind is not immune from scientific investigation and understanding, and neither is the brain (the physical implementation of the mind). That should be a fairly uncontroversial viewpoint.

I simply go one further and say that human brains are not immune from simulation, and simulating a brain would automatically get you a mind.

Dale Carrico said...

In response to your (repeated) declaration that "the internal [by which you mean the actually-existing material] implementation [of intelligence] does not matter" I quipped: "Some materialist you turned out to be." You reacted, with robotic predictability:

Bit of a non sequitur that. I say the internal implementation does not matter so long as the external behaviour still yields intelligence, in what way does that contradict materialism? If anything, claiming that it matters whether there's neurons or silicon chips implementing intelligent behaviour is claiming there's something important about neurons that goes beyond their material behaviour.

An actual materialist would grasp that the actually-existing material incarnation of minds, like the actually-existing material carrier of information, is non-negligible to the mind, to the information. The glib treatment of material differences as matters of utter indifference, as perfectly inter-translatable without loss, as cheerfully dispensable is hardly the attitude of a materialist.

Once again, you airily refer to "silicon chips implementing intelligent behavior" when that has never once happened, looks nothing like something about to happen, and is the very possibility central to the present dispute. However invigorating the image of this AI is in your mind -- it is not real, nor is it a falsifiable thought-experiment, nor is it a destiny, nor is it a burning bush, nor is it writing on a wall, and those of us who fail to be moved as you are by it are not denying reality; its stipulated properties are not facts in evidence. You will deny that you are claiming AI is real or would be "easy" -- but time after time you conjure up these fancies and attribute properties to them with which skeptics presumably must deal, just because you want them to be true so fervently. Just as well argue over how many angels can dance on a pinhead.

And then, too, once again, you insinuate that the recognition that such real-world intelligence as actually exists happens to be materialized in biological organization amounts to positing something magical or supernatural about brains. No, Gareth: the intelligence that exists is biological, and the artificial intelligence to which you attribute all sorts of properties does not exist. To entertain the logical possibility that phenomena legible to us as intelligent might be materialized otherwise does not mean that they are, that we can engineer them, or that we know enough about the intelligence we materially encounter to be of any help were we to want to engineer intelligence otherwise. None of that is implied in the realization that there is no reason to treat intelligence as somehow supernatural.

Materialism about mind demands recognition that the materialization of such minds as are in evidence is biological. That intelligence could be materialized otherwise is possible, but not necessarily plausible, affordable, or even useful. Maybe it would be, maybe not. Faith-based techno-transcendental investment of AI with wish-fulfillment fantasies -- of an overcoming of the scary force of contingency in life, an arrival at omnicompetence no longer bedeviled by the humiliations of error or miscommunication, the driving of engines of superabundance delivering treasure beyond the dreams of avarice, or the digital immortalization of an "info-soul" in better-than-real virtuality -- may make AI seem so desirable that you want to pretend we know enough to build it when we do not, but none of that has anything to do with science or materialism. Your attitude is common or garden variety religiosity of the most blatant kind. Even if you wear a labcoat rather than a priest's vestments, it's not like we can't see it's still from Party City.

Dale Carrico said...

The human mind is not immune from scientific investigation and understanding, and neither is the brain (the physical implementation of the mind). That should be a fairly uncontroversial viewpoint. I simply go one further and say that human brains are not immune from simulation, and simulating a brain would automatically get you a mind.

No one has denied that intelligence can be studied and better understood. I wonder whether your parenthetic description of the brain as "the physical implementation of the mind" already sets the stage for your desired scene of an interested agent implementing an intelligence, when there is actually no reason to assume such a thing where the biologically incarnated mind is concerned. When you say you "simply go one further" in turning to the claim that simulating a brain automatically gets you a mind, I disagree that there is anything "simple" about that leap, or that it is in any sense a logical elaboration of similar character to the preceding (as you insinuate by the word "further"). Not only does simulating a brain not obviously or necessarily "automatically" get you a mind, it quite obviously does not, and necessarily does not, get you the mind so simulated. To say otherwise is not materialist, but immaterialist -- but worse, it is palpably insane. You are not a picture of you, a picture of a brain is not a brain, and a moving picture of a mind's operation in some respects is not the mind's operation. You may be stupid and insensitive enough not to see the difference between a romantic partner and a fuck doll got up to look like that romantic partner, but you should not necessarily expect others to be so dull if you bring your doll to meet the family or hope to elude prosecution for murdering your partner when the police come calling.

jimf said...

> Please don't tell me you think cognitive psychology is
> somehow invalid. It would explain a lot though.....

I was referring to this, for what it's worth:

Gerald M. Edelman, _Bright Air, Brilliant Fire_,
Chapter 2, "Putting the Mind Back into Nature".
--------------
In the last few decades, practitioners in the field of cognitive science
have made serious and extensive attempts to transcend the limitations
of behaviorism. Cognitive science is an interdisciplinary effort
drawing on psychology, computer science and artificial intelligence,
aspects of neurobiology and linguistics, and philosophy. Emboldened by
an apparent convergence of interests, some scientists in these fields
have chosen not to reject mental functions out of hand as the
behaviorists did. Instead, they have relied on the concept of
mental representations and on a set of assumptions collectively called
the functionalist position. From this viewpoint, people behave
according to knowledge made up of symbolic mental representations.
Cognition consists of the manipulation of these symbols. Psychological
phenomena are described in terms of functional processes. The efficacy
of such processes resides in the possibility of interpreting items
as symbols in an abstract and well-defined way, according to a set
of unequivocal rules. Such a set of rules constitutes what is
known as a syntax.

The exercise of these syntactical rules is a form of computation. . .
Computation is assumed to be largely independent of the structure and
the mode of development of the nervous system, just as a piece of
computer software can run on different machines with different
architectures and is thus "independent" of them. A related idea
is the notion that the brain (or more correctly, the mind) is like
a computer and the world is like a piece of computer tape, and that
for the most part the world is so ordered that signals received can
be "read" in terms of logical thought.

Such well-defined functional processes, it is said, constitute semantic
representations, by which it is meant that they unequivocally specify
what their symbols represent in the world. In its strongest form,
this view proposes that the substrate of all mental activity is in
fact a language of thought -- a language that has been called "mentalese". . .

This point of view -- called cognitivism by some -- has had a great vogue
and has prompted a burst of psychological work of great interest
and value. Accompanying it have been a set of remarkable ideas.
One is that human beings are born with a language acquisition device
containing the rules for syntax and constituting a universal grammar.
Another is the idea, called objectivism, that an unequivocal description
of reality can be given by science (most ideally by physics).
This description helps justify the relations between syntactical
processes or rules and things or events -- the relations that constitute
semantic representations. Yet another idea is that the brain orders
objects in the "real" world according to classical categories,
which are categories defined by sets of singly necessary and jointly
sufficient conditions.

jimf said...

I cannot overemphasize the degree to which these ideas or their
variants pervade modern science. They are global and endemic. But I
must also add that the cognitivist enterprise rests on a set of
unexamined assumptions. One of its most curious deficiencies is
that it makes only marginal reference to the biological foundations
that underlie the mechanisms it purports to explain. The result is
a scientific deviation as great as that of the behaviorism it has
attempted to supplant. The critical errors underlying this deviation
are as unperceived by most cognitive scientists as relativity
was before Einstein and heliocentrism was before Copernicus.

What is it these scholars are missing, and why is it critical?
They are missing the idea that a description of the mind cannot
proceed "liberally" -- that is, in the absence of a detailed
biological description of the brain. They are disregarding a large
body of evidence that undermines the view that the brain is a
kind of computer. They are ignoring evidence showing that the way
in which the categorization of objects and events occurs in animals
and in humans does not at all resemble logic or computation.
And they are confusing the formal powers of physics as created by
human observers with the presumption that the ideas of physics
can deal with biological systems that have evolved in historical
ways.

I claim that the entire structure on which the cognitivist enterprise
is based is incoherent and not borne out by the facts. I do not attempt
to support this strong claim in the text of this book; to do so
would require ranging over many disciplines with many unshared
assumptions before arriving at my main thesis. For this reason, I
have put my arguments against the forms of pure cognitivism into
a Critical Postscript placed at the end of this book. . .

This essay addresses what I believe to be a series of category mistakes.
The first is the proposal that the solution to the problems of
consciousness will come from the resolution of some dilemmas of
physics. The second is the suggestion that computation and
artificial intelligence will yield the answers. Third, and most
egregious, is the notion that the whole enterprise can proceed by
studying behavior, mental performance and competence, and language
under the assumptions of functionalism without first understanding
the underlying biology. . .

The principle I will follow is this: There must be ways to put
the mind back into nature that are concordant with how it got
there in the first place. These ways must heed what we have
learned from the theory of evolution. In the course of evolution,
bodies came to have minds. But it is not enough to say that the
mind is embodied; one must say how. To do that we have to take
a look at the brain and the nervous system and at the
structural and functional problems they present.
====
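The cognitivist picture Edelman is attacking -- symbols manipulated according to unequivocal syntactic rules -- is easy to make literal in code, which is exactly why it appealed to the GOFAI generation. A toy sketch in the spirit of the old production systems (the rules and symbols here are invented purely for illustration; real systems of the Newell/Simon era worked on this pattern at vastly larger scale):

```python
# A toy "production system": cognition-as-symbol-manipulation in
# miniature.  Rules fire whenever their pattern of symbols is
# present, adding a new symbol, until nothing new can be derived.

RULES = [
    # (pattern of symbols that must be present, symbol to add)
    ({"dark", "switch-up"}, "light-on"),
    ({"light-on", "book"}, "can-read"),
]

def infer(facts):
    """Apply RULES to a set of symbols until a fixed point is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for pattern, result in RULES:
            if pattern <= facts and result not in facts:
                facts.add(result)
                changed = True
    return facts

print(sorted(infer({"dark", "switch-up", "book"})))
# ['book', 'can-read', 'dark', 'light-on', 'switch-up']
```

Note that nothing in the loop depends on what the symbols "mean" -- which is precisely Edelman's complaint: the semantics is supplied entirely by the human reading the strings.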

Dale Carrico said...

Can I add my own endorsement of Gerald M. Edelman's beautiful and brilliant book, Bright Air, Brilliant Fire? It has to be well over a decade since I first read this book -- I hardly recommend it as the last word on these topics, nor as a text with which I agree on every word, but it is so sensible and forceful it really does seem to me quite indispensable.

jimf said...

> Gerald M. Edelman, _Bright Air, Brilliant Fire_,
> Chapter 2, "Putting the Mind Back into Nature".
> --------------
> . . .
> I cannot overemphasize the degree to which these ideas or their
> variants pervade modern science. They are global and endemic. . .
> ====

From the comment thread at
http://amormundi.blogspot.com/2008/03/giulio-demands-clarifications-and-i.html
-----------------
In a book I mentioned 7 [now 14!] years ago (my God!) on the Extropians' list
[ http://extropians.weidai.com/extropians.2Q01/1578.html ],
_Going Inside: A Tour Round a Single Moment of Consciousness_
by John McCrone, 1999; Chapter 12 "Getting It Backwards",
the author remarks:

"[P]ersonally speaking, the biggest change for me
was not how much new needed to be learnt, but how much that was
old and deeply buried needed to be unlearnt. I thought my
roundabout route into the subject would leave me well prepared.
I spent most of the 1980s dividing my time between computer
science and anthropology. Following at first-hand the attempts
of technologists to build intelligent machines would be a good
way of seeing where cognitive psychology fell short of the mark,
while taking in the bigger picture -- looking at what is known
about the human evolutionary story -- ought to highlight the
purposes for which brains are really designed. It would be a
pincer movement that should result in the known facts about the
brain making more sense.

Yet it took many years, many conversations, and many false starts
to discover that the real problem was not mastering a mass of
detail but making the right shift in viewpoint. Despite
everything, a standard reductionist and computational outlook on
life had taken deep root in my thinking, shaping what I expected
to see and making it hard to appreciate anything or anyone who
was not coming from the same direction. Getting the fundamentals
of what dynamic systems were all about was easy enough, but then
moving on from there to find some sort of balance between
computational and dynamic thinking was extraordinarily difficult.
Getting used to the idea of plastic structure or guided
competitions needed plenty of mental gymnastics...

[A]s I began to feel more at home with this more organic way of
thinking, it also became plain how many others were groping their
way to the same sort of accommodation -- psychologists and brain
researchers who, because of the lack of an established vocabulary
or stock of metaphors, had often sounded as if they were all
talking about completely different things when, in fact, the same
basic insights were driving their work."
====

jimf said...

It strikes me that this conversation (/disagreement) has
been proceeding along three different fronts (with, perhaps,
three different viewpoints) that have not yet been clearly
distinguished:

1. Belief in/doubts about GOFAI ("Good Old-Fashioned AI") -- the
50's/60's Allen Newell/Herbert Simon/Seymour Papert/John McCarthy/Marvin Minsky
et al. project to replicate an abstract human "mind" (or salient aspects
of one, such as natural-language understanding) by performing syntactical
manipulations of symbolic representations of the world using
digital computers. The hope initially attached to this approach
to AI has been fading for decades. Almost a quarter of a century
ago, in the second edition of his book, Hubert Dreyfus called
GOFAI a "degenerating research program". It's still degenerating,
as far as I know.

This Dreyfus quote is also from the 7-year-old comment thread at
http://amormundi.blogspot.com/2008/03/giulio-demands-clarifications-and-i.html :

-----------
Almost half a century ago [as of 1992] computer pioneer
Alan Turing suggested that a high-speed digital
computer, programmed with rules and facts, might exhibit
intelligent behavior. Thus was born the field later
called artificial intelligence (AI). After fifty
years of effort [make it 70, now], however, it is now clear
to all but a few diehards that this attempt to produce artificial
intelligence has failed. This failure does not mean
this sort of AI is impossible; no one has been able
to come up with a negative proof. Rather, it has
turned out that, for the time being at least, the
research program based on the assumption that human
beings produce intelligence using facts and rules
has reached a dead end, and there is no reason to
think it could ever succeed. Indeed, what John
Haugeland has called Good Old-Fashioned AI (GOFAI)
is a paradigm case of what philosophers of science
call a degenerating research program.

A degenerating research program, as defined by Imre
Lakatos, is a scientific enterprise that starts out
with great promise, offering a new approach that
leads to impressive results in a limited domain.
Almost inevitably researchers will want to try to apply
the approach more broadly, starting with problems
that are in some way similar to the original one.
As long as it succeeds, the research program expands
and attracts followers. If, however, researchers
start encountering unexpected but important phenomena
that consistently resist the new techniques, the
program will stagnate, and researchers will abandon
it as soon as a progressive alternative approach
becomes available.
====

Dale and I agree in our skepticism about this one.
Gareth Nelson, it would seem (and many if not most >Hists, I expect)
still holds out hope here. I think it's a common
failing of computer programmers. Too close to their
own toys, as I said before. ;->

jimf said...

2. The notion that, even if we jettison the functionalist/cognitivist/symbol-manipulation
approach of GOFAI, we still might **simulate** the low-level dynamic messiness of a
biological brain and get to AI from the bottom up instead of the top down.
Like Gerald Edelman's series of "Darwin" robots or, at an even lower and putatively
more biologically-accurate level, Henry Markram's "Blue Brain" project.

Gareth seems to be on-board with this approach as well, and says somewhere
above that he thinks a hybrid of the biological-simulation
approach **and** the GOFAI approach might be the ticket to AI
(or AGI, as Ben Goertzel prefers to call it).

Dale still dismisses this, saying that a "model" of a human mind is not
the same as a human mind, just as a picture of you is not you.

I am less willing to dismiss this on purely philosophical grounds. I am
willing to concede that **if** there were digital computers fast enough
and with enough storage to simulate biological mechanisms **at whatever
level of detail turned out to be necessary** (which is something we don't
know yet) **and** if this sufficiently-detailed digital simulation could
be connected either to a living body with equally-miraculously (by today's
standards) fine-grained sensors and transducers, **or** to a (sufficiently
fine-grained) simulation of a human body immersed in a (sufficiently
fine-grained) simulation of the real word -- we're stacking technological
miracle upon technological miracle here! -- then yes, this hybrid entity
with a human body and a digitally-simulated brain,
I am willing to grant, might be a good-enough approximation
of a human being (though hardly "indistinguishable" from an ordinary
human being, and the poor guy would certainly find verself
playing a very odd role indeed in human society, if ve were the first one).
I'm even willing to concede (piling more miracles on top of
miracles by granting the existence of those super-duper-nanobots)
the possibility of "uploading" a particular human personality,
with memories intact, using something like the
Moravec transfer (though again, the "upload" would find verself in
extremely different circumstances from the original, immediately
upon awakening). This is still not "modelling" in any ordinary
sense of the word in which it occurs in contemporary scientific
practice! It's an as-yet-unrealized (except in the fictional realm
of the SF novel) **substitution** of a digitally-simulated
phenomenon for the phenomenon itself (currently
unrealized, that is, except in the comparatively trivial case
in which the phenomenon is an abstract description of another
digital computer).

**However**, I am unpersuaded, Moravec and Kurzweil and their
fellow-travellers notwithstanding, that Moore's Law and the "acceleration of
technology" are going to make this a sure thing by 2045. I am not
even persuaded that we know enough to be able to predict
that such a thing might happen by 20450, or 204500, whether by
means of digital computers or any other technology, assuming a
technological civilization still exists on this planet by then.
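The "sure thing by 2045" arithmetic itself is trivial to reproduce; what it can't do is settle the dispute. A back-of-envelope sketch, assuming (as the Kurzweilians roughly do) a fixed two-year doubling period for available computation -- the doubling assumption, not the exponentiation, is where the whole argument lives:

```python
# Back-of-envelope: how much raw growth does a fixed doubling
# period predict?  The two-year doubling period is an illustrative
# assumption, not an established fact about future hardware.

def growth_factor(years, doubling_period=2.0):
    """Multiplicative growth after `years`, doubling every `doubling_period` years."""
    return 2.0 ** (years / doubling_period)

# From 2015 to 2045: thirty years of doubling every two years.
factor = growth_factor(2045 - 2015)
print(f"Predicted growth 2015 -> 2045: {factor:,.0f}x")  # 32,768x

# The catch: no exponent, however large, tells you whether *any*
# amount of digital computation can substitute for a brain --
# that premise is the thing under dispute.
```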

The physicist Richard P. Feynman, credited as one of the inventors
of the idea of "nanotechnology", is quoted as having said "There's
plenty of room at the bottom." Maybe there is. Hugo de Garis
thinks we'll be computing using subatomic particles in the not
too distant future! If they're right, then -- sure, maybe all
of the above science-fictional scenarios are plausible. But others
have suggested that maybe, just maybe, life itself is as close to
the bottom as our universe permits when it comes to, well, life-like
systems (including biologically-based intelligence). If that's
so, then maybe we're stuck with systems that look more-or-less
like naturally-evolved biochemistry.

jimf said...

3. Attitudes toward the whole Transhumanist/Singularitarian
mishegas. What Richard A. L. Jones once called the "belief package"
( http://www.softmachines.org/wordpress/?p=1607 ), or what
Dale commonly refers to as the three "omni-predicates" of >Hist
discourse: omniscience=superintelligence; omnipotence=super-enhancements
(including super-longevity); omnibenevolence=superabundance.
http://amormundi.blogspot.com/2007/10/superlative-schema.html

This is a very large topic indeed. It has to do with politics,
mainly the politics of libertarianism (Paulina Borsook,
_Cyberselfish_, Barbrook & Cameron, _The Californian Ideology_),
religious yearnings (the "Rapture of the Nerds"),
cult formation (especially sci-fi tinged cults, such as
Ayn Rand's [or Nathaniel Branden's, if you prefer] "Objectivism",
L. Ron Hubbard's "Scientology", or even Joseph Smith's Mormonism!),
psychology (including narcissism and psychopathy/sociopathy),
and other general subjects. Very broad indeed!

Forgive me for putting it this insultingly, but I fear Gareth
may still be savoring the Kool-Aid here.

Dale and I are long past this phase, though we once both
participated on the Extropians' mailing list, around or
before the turn of the century. When we get snotty
(sometimes reflexively so ;-> ), it's the taste of the Kool-Aid
we're reacting to, which we no longer enjoy, I'm afraid.

Dale Carrico said...

Dale still dismisses this, saying that a "model" of a human mind is not the same as a human mind, just as a picture of you is not you.

You may be right that I am a bit more skeptical than you are on this second question -- I am not sure, your formulation seems pretty congenial after a first read -- all I would say is that the context for all this was the futurological conceit of uploading in particular, and I do indeed still regard that notion as too incoherent in principle to draw any comfort from the points you are making.

Even if, as Gareth seems to be implying, there is a "weak" uploading project in which good-enough simulations can replace people for an (insensitive enough?) audience apart from a "strong" uploading project in which some sort of info-souls are somehow translated/migrated and thus, again somehow, immortalized digitally, I think both notions are bedeviled by conceptual and rhetorical and political nonsense rendering them unworthy of serious consideration (except as sfnal conceits doing literary kinds of work). I am not sure anybody but Gareth actually maintains this strong/weak distinction quite the way he seems to do, and I'm not sure his endorsement of the weak version doesn't drift into the strong version in any case in its assumptions and aspirations.

Dale Carrico said...

Even back in 93 I was hate-reading Extropians -- I once thought/hoped James Hughes's socialist strong reading of transhumanism might yield a useful technoprogressivism, but boy was I wrong to hold out that hope! I will admit that as an avid sf reader with a glimpse of the proto-transhumanoid sub(cult)ure via the L5 Society and Durk Pearson and Sandy Shaw I was a bit transhumanish at age eleven or so -- with a woolly sense that longevity medicine and nano/femto-superabundance should be the next step after the Space Age. The least acquaintance with consensus science disabused me of that nonsense. It's a bit like the way my first contact with comparative religion, pretty much in my first term away from home in college, made me a cheerful atheist, and confrontation with an actually diverse world made the parochial pieties of market ideologies instantly hilarious.

Dale Carrico said...

Jim, I'm going to turn those last three Moot-gems into a guest post tomorrow if you don't object.

jimf said...

> Jim, I'm going to turn those last three Moot-gems into
> a guest post tomorrow if you don't object.

Sure, that's fine with me.

> Even back in 93 I was hate-reading Extropians. . .

Well, I was a late-comer (as usual ;-> ). As I'm sure I've mentioned
before, I found out about the whole on-line >Hist scene in
the summer of '97 (about 6 months after I first got the Web
at home via WebTV) when, Alta-Vista'ing Iain Banks' "Culture"
(which I'd just come across a reference to while thumbing
through a sci-fi magazine at one of the local Barnes & Nobles)
I crashed into Eliezer Yudkowsky's "Staring into the Singularity".
And thence to the Extropians. (I missed the 'zine era with them.)
I had heard of the Singularity before that -- I'd already read
Vinge's _Across Realtime_ by then.

> I was a bit transhumanish at age eleven or so. . .

Oh, me too. I read Arthur C. Clarke's _Profiles of the Future_
soon as it appeared on the supermarket paperback rack in --
what, '62, '63? I remember freaking out my 6th-grade homeroom
teacher, in a free-for-all late-afternoon discussion, by mentioning
Clarke's views on the prospects for human immortality.

And my favorite original _The Outer Limits_ episode was "The Sixth
Finger", which aired just a few weeks before Kennedy was assassinated.
I didn't find out 'til many years later that parts of
David McCallum's speech at the end were lifted practically verbatim
from Shaw's _Back to Methuselah_ (which was, of course, on the
Extropians' recommended reading list).

> . . .the L5 Society and Durk Pearson and Sandy Shaw. . .

I had a hardcover copy of Pearson & Shaw's _Life Extension_
back in the summer of '82. The summer when The Human League's
"Don't You Want Me?" was playing incessantly on the radio. ;->

I knew folks interested in the L5 society, but for some reason
I was never that much of a space geek. I loved _Star Trek_,
of course, but I was less enthralled by the real-live space
program. It just didn't live up to the TV version. ;->

jimf said...

> Gareth Nelson said...
>
> I have to admit, IBM 360 consoles are downright sexy.

Indeed. A guy named John Savard has documented the
front panels (in HTML, not photorealistic) of the entire
IBM 360 line. (I admire obsessives who stay **focused** ;-> ):
http://www.quadibloc.com/comp/pan04.htm
And other famous machines:
http://www.quadibloc.com/comp/panint.htm

Apropos the DEC PDP-1 & 4, Mr. Savard comments:

"[I]nstead of being rectangular, like most computer front panels, the edges
were angled so as to produce a teardrop-like distorted hexagon. . .
a shape that would not have been out of place on a
flying saucer in an old science-fiction movie. . .
[T]he idea that a computer should. . . look futuristic,
as befits something at the peak of current technology,
was part of the culture
at DEC."

Nevertheless, while a picture of a computer certainly isn't
a computer, neither is a computer simulation of a computer
(though it is certainly a computer) always what you're
looking for.
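That parenthetical -- a simulation of a computer is itself a computer -- is the one case where "simulation" really is substitution, and it's easy to make concrete. A minimal sketch of the idea (a made-up three-instruction accumulator machine, nowhere near SimH's fidelity to real hardware):

```python
# Toy emulator for a hypothetical 3-instruction accumulator machine.
# The instruction set is invented for illustration; real emulators
# like SimH model actual machines down to registers and device timing.

def run(program):
    """Execute a list of (opcode, operand) pairs; return the accumulator."""
    acc, pc = 0, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "LOAD":       # acc <- constant
            acc = arg
        elif op == "ADD":      # acc <- acc + constant
            acc += arg
        elif op == "JNZ":      # jump to address arg if acc is nonzero
            if acc != 0:
                pc = arg
                continue
        pc += 1
    return acc

# Count down from 3 by repeatedly adding -1 until the accumulator is zero.
print(run([("LOAD", 3), ("ADD", -1), ("JNZ", 1)]))  # prints 0
```

The emulated machine computes exactly what the "real" one would -- which is why this case is trivial, and why it licenses nothing about brains, whose "abstract description" is precisely what nobody has.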

A poignant thread on the SimH mailing list
alludes to this:

http://mailman.trailing-edge.com/pipermail/simh/2016-February/014588.html
----------------
SIMH and physical hardware
Zachary Kline
Wed Feb 10 2016

. . .

[For] a newbie. . ., all SIMH machines are very similar. . .
[T]he “feel,” of the original hardware. . . [isn't]
there. Simh can emulate tons of hardware from different
manufacturers, but. . . [won't] tell me what it was like to actually
use the devices in a physical sense.

As a blind user, I’m doubly interested in this kind of physicality because
I experience the world through touch and sound. . .
[T]hese notional machines. . . are all reduced to. . .
abstractions at a console prompt. It’s hard to imagine
a thing I was far too young to experience. . .

---

David Gesswein
Wed Feb 10 2016

. . .[No e]mulator could match. . . [the] feel of
typing on a teletype. . . [Or] the warm oil smell
they give off.

---

lists at openmailbox.org
Wed Feb 10 2016

I felt a twinge of sorrow reading your post. . .

. . .many people who used these machines back
in the day never saw the machine or came near it. . .
[U]niversities and businesses. . . kept [machines] in [a machine]
room. . . with air conditioning and cabling. . . under
[a] raised floor. . . [U]sers and programmers sat in
terminal rooms or at their desks. . . and
typically never saw the blinkenlights or. . . felt the disk drives
shaking the floor. . .

A lot of PDP gear was in small labs where most students didn't go. . .

[I]n big companies that used mainframes [t]he programmers. . .
were not allowed in the machine room. The doors had combination
locks and only authorized personnel. . . were allowed in there.
Slipping into the machine room with an operator
buddy was grounds for dismissal. . . Some data centers. .
did have glass walls onto the floor. But most did not.

The main way that we get the sense today of "wow this is great" is seeing
the terminal displays with the same layouts and prompts as we did in the old
days. . .

---

Bob Supnik
Wed Feb 10 EST 2016

The original article on restoration vs simulation
(http://www.hpl.hp.com/hpjournal/dtj/vol8num3/vol8num3art2.pdf) still
provides good insights into what's achievable by restoring old systems
vs simulating them. Co-author Max Burnet's collection of working DEC
gear provides a far better aural and tactile recreation of using old
DEC systems than SimH. Unfortunately, it's a lot more difficult to access.

The Living Computer Museum in Seattle has many working systems. . .
and is renovating more. The Computer History Museum in [Mountain View, CA]
has some working exhibits. . . too. . .
====

Smell of warm oil, indeed.

But they are crazy hard to get (and keep) working.

Debugging the 1959 IBM 1401 Computer at the Computer History Museum
https://www.youtube.com/watch?v=PwftXqJu8hs

Rhode Island Computer Museum DEC PDP-9 Restoration
http://www.ricomputermuseum.org/Home/equipment/dec-pdp-9/pdp-9-restoration