The heretic thought that consciousness is nothing but the way the various reactions going on in the brain feels when that brain happens to be yours ... [surely] sounds like hollowing out of human experience of the worst kind imaginable.
I am a materialist on matters of mind, I have been a cheerful atheist for a quarter century, I am a champion of consensus science, not a scientist by any means but hardly uninformed about technoscience questions, and my politics are those of secular progressive consensual democracy.
You are simply straightforwardly not understanding my point. I don't agree that it is particularly heretical or harrowing to attribute consciousness to neurochemistry in an organismic brain.
Indeed, that statement shouldn't be the least bit of a surprise to you, since one of my repeated accusations against the so-called Singularitarians, dead-enders as most of them are in the old-school Program of Strong AI (an accusation I have made quite literally in the very thread to which you are contributing, and which I would imagine, then, that you have taken the time to read), is that despite their own materialism they tend to treat the actual substantial form of materialization hitherto associated with intelligence as comparatively negligible, fancying that complex software and the complex behaviors it can provoke can properly be denominated "intelligence" without arising out of, or exhibiting, anything like the dynamisms of the actually-existing organismic intelligences from which they are appropriating the term.
Despite these failures, their discourse is nonetheless saturated with the paraphernalia of intelligence as it is actually incarnated in the world. Discussions of artificial intelligence inevitably lead into discussions of intentions, values, optimizations, smartness, personhood, rights, friendliness and so on, none with any good justification.
To a certain extent these figurative borrowings from one domain to another to prop up our understanding of new phenomena and new problems are inevitable and useful. The term in rhetoric for this figure is catachresis, in case you're interested (I teach this stuff to my university students): it describes both the derangement of literal usages to name phenomena to which they didn't originally apply and the coinage of new terms to accomplish this (such coinages, after all, typically involve borrowings from other languages and so on).
Usually these borrowings are functionally proto-theoretical: their plausibility builds on the sense, right or wrong, of the analogical or associational propriety of the traffic between the old and the new domain over which the borrowing takes place. Also, the traces of the older associations of the term reverberate into the new usages, yielding rich ramifying associations that continue to exert their force on the ways the new usages play out in the world.
It would seem to me that the attribution of "intelligence" to computers and, subsequently, the reduction of intelligence to computation has been an enormously compelling catachresis that has palpably confused far more than it has illuminated and in fact has yielded a poisonous harvest of incomprehension where matters of testifying to the experiences of critical and abstract and empathetic and passionate and imaginative thinking, understanding, and judgment are concerned, the testifying on which distinctively human forms of agency and meaning actually depend for their abiding intelligibility, force, and flourishing.
It is not materialism as such that has hollowed out the human understanding of our own freedom and agency and meaningfulness, it is an instrumentalization of reason that was never compelled by materialism and which has made its advocates ever more insensitive to and dismissive of the difference between persons and robots, the difference between the exercise of freedom in the presence of one's peers and the exertion of instrumental force translating means into ends. Instrumentality, of course, cannot provide the ends at which its efficiencies should be aimed, and so freedom rewritten in the image of its imperialism is exactly what one would expect of a robot, blind, meaningless, brute-force mistaken as emancipation rather than the radical impoverishment it would be. This is not a problem of materialism, it is a problem of reductionism.
3 comments:
The brain is not a computer and what is happening in the brain is not computations. What a computer does when it calculates is not “thinking” and not “intelligence”. People who say such things are either confused or speaking metaphorically. So far I agree.
What a computer can do however, is to simulate stuff such as chemical reactions and electrical currents.
If (and now it is getting superlative) you then take ALL the reactions going on in the brain (and the rest of the body as well, if one insists), and let the computer simulate them, you will get a simulation of a brain, also known as a brain in a computer, also known as an upload.
One can argue: “but it is not a real brain it is just a simulation! It is not conscious, it just simulates consciousness.”
But if one then simulates a signal in the auditory nerve, corresponding to someone asking “are you conscious?”, the result will be a signal in the hypoglossal nerve (the one controlling the tongue, and hence speech) corresponding to the answer “yes”. Does it then make sense to say that this (very hypothetical) being has no intelligence and no consciousness? I think not.
> You are simply straightforwardly not understanding my point. I don't
> agree that it is particularly heretical or harrowing to attribute
> consciousness to neurochemistry in an organismic brain.

It is likely true that I don’t understand you, since I really can’t see how anyone describing themselves as a materialist on matters of mind can disagree with what I have written above. Maybe you don’t. If you don’t disagree, let us go another, even more superlative, step further.
The program doing the mentioned simulation will be very long, since it is simulating an awful lot of neurons, reactions and currents. Maybe it is possible to make a shorter program that does the same as the long program, that is, gives the same output for the same input. If the long program is “intelligent”, the short program is as well. (Let me not mention the obvious next step.)
And maybe it is possible (as in doable) to code this short program ”from scratch” without making the long one first. This is what the various AGI-people are trying to do, in my opinion. I don’t think any of the AGI-programs are going to be successful, since I don’t think this is doable, but I am not ready to dismiss it all out of hand, and I wish them good luck in the attempt.
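The equivalence appealed to here — that any property defined purely by input-output behavior cannot distinguish a long program from a short one that behaves identically — can be illustrated with a toy sketch (purely illustrative; the function names and the summing example are my own, not anything from the AGI programs under discussion):

```python
# Toy illustration of behavioral equivalence: two programs that agree
# on every input, though one is far more laborious than the other.

def long_program(n):
    """Step-by-step 'simulation': sum 1..n one addition at a time."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def short_program(n):
    """Gauss's closed form: same input-output behavior, far shorter."""
    return n * (n + 1) // 2

# The two agree on every input checked, so any judgment based only on
# what comes out for what goes in cannot tell them apart.
assert all(long_program(n) == short_program(n) for n in range(1000))
```

Whether anything brain-like admits such a compression is, of course, exactly the open question the comment goes on to doubt.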
Now this was a lot of the ”technical stuff”. On to something else.
In my native language, and in English too I think, the word “materialism” has a dual meaning. The first meaning is the one we are using here, something like “the belief that everything is made out of matter only”. The second one is more like “focus on material stuff” as opposed to focus on feelings and ideas. People who disagree with materialism in the first sense love to knowingly confuse the two, and say things like “oh, you are a materialist, that means you only love your money”.
Therefore I tend to avoid that word.
I much prefer the word ”reductionism”. Many great reductions have been made in the history of science. There are reductions of two theories into one, like the unification of electricity and magnetism. There are also occasions where something that was previously thought to have both a material and an ”esoteric” component, was reduced to matter all the way through.
> Instrumentality, of course, cannot provide the ends at which its
> efficiencies should be aimed, and so freedom rewritten in the image of
> its imperialism is exactly what one would expect of a robot, blind,
> meaningless, brute-force mistaken as emancipation rather than the
> radical impoverishment it would be. This is not a problem of
> materialism, it is a problem of reductionism.

One can, of course, make reductions that are wrong, like reducing people to robots and intelligence to computation. But I still think that the word “reductionism” is preferable. (The optimal word, however, is “naturalism”. Why isn’t it used more?)
> What a computer can do however, is to simulate stuff such as
> chemical reactions and electrical currents. If (and now it is
> getting superlative) you then take ALL the reactions going on
> in the brain (and the rest of the body as well, if one insists),
> and let the computer simulate them, you will get a simulation
> of a brain. . .
>
> One can argue: “but it is not a real brain it is just a
> simulation! It is not conscious, it just simulates consciousness.”
> . . .
> Does it then make sense to say that this (very
> hypothetical) being has no intelligence and no consciousness?
> I think not.
Well, you won't get the "it's not. . . it just simulates" argument
from me. If it "simulates" well enough, then it **is**, as far
as I'm concerned (at least, it "is", in the limit, to the same
degree as my next-door neighbor, or my friends, or the
dogs I meet, or anyone else I routinely credit with being conscious).
(It will, as you suggest, take more than an isolated brain, though. It'll take
a body -- real or "simulated", and it'll have to be embedded in a social
matrix.)
Whether such a simulation can be performed by any conceivable digital
computer is an empirical question. I have no beef about the empirical
question, and heck, even if the simulation is only limited and
partial, it might be a useful tool in researching the "real thing".
Isn't that what the EPFL's Blue Brain project is all about? (Which was
supposed to be creating a simulation of a single neocortical column
from the brain of a newborn mouse, capable of processing signals
at 1/100 real time, using a massive IBM Blue Gene/L parallel processor --
haven't heard much recently from them, though.)
> The program doing the mentioned simulation will be very long,
> since it is simulating an awful lot of neurons, reactions and currents.
> Maybe it is possible to make a shorter program that does the same
> as the long program, that is give the same output to the same input.
Yeah, "Blue Brain" was going to be investigating that sort of thing,
too. Again, it's an empirical question. If the talent and
resources are available to investigate those questions, then
bring 'em on!
> And maybe it is possible (as in doable) to code this short program
> ”from scratch” without making the long one first. This is what the
> various AGI-people are trying to do, in my opinion. I don’t think
> any of the AGI-programs are going to be successful, since I don’t think
> this is doable, but I am not ready to dismiss it all out of hand, and
> I wish them good luck in the attempt.
Ah, "from scratch". Now we're heading back in the direction of
Good Old-Fashioned AI, which has been failing spectacularly for the
past half century.
There have been sophisticated reservations about that project for
decades, now. There's plenty of literature on the subject. You might
want to **start** with Hubert L. Dreyfus.
These empirical questions are all, or **should** be, simply
mainstream science.
But when they start getting mixed up with 1) science fiction tropes
out of Vernor Vinge and Greg Egan being reified into conferences and
Institutes, 2) guru-wannabes making dire pronouncements about the
end of the world or immortality and Heaven on earth, 3) superficially
"rationalistic" retro-philosophies from decades past (such as Dianetics
and Objectivism) and similar contemporary recycled notions that you
can figure out (and control) how the human mind works through simple
introspection (a la Neuro-Linguistic Programming in the form
of cut-rate self-help paperbacks or priced-for-billionaires "success seminars"
like Keith Raniere's ESP -- "Executive Success Programs") being used as
the basis for (otherwise retro and long discredited) Good Old-Fashioned AI
research proposals [*] then, well -- we have a problem, Houston. Granted,
no worse a problem than any other example of silliness going on in the world
today but still -- a problem.
[*] E.g., Peter Voss
http://www.flickr.com/photos/29246236@N02/2748348964/
http://www.theatlasphere.com/columns/050601-zader-peter-voss-interview.php
-------------------
TA: How has Ayn Rand's philosophy influenced your work?
Voss: I came across Rand relatively late in life,
about 12 years ago.
What a wonderful journey of discovery — while at the
same time experiencing a feeling of "coming home." Overall,
her philosophy helped me clarify my personal values
and goals, and to crystallize my business ethics,
while Objectivist epistemology in particular inspired
crucial aspects of my theory of intelligence.
Rand's explanation of concepts and context provided
valuable insights, even though her views on consciousness
really contradict the possibility of human-level AI.
TA: Which views are you referring to?
Voss: Primarily, the view that volitional choices do
not have antecedent causes. This position implies that
human-level rationality and intelligence are incompatible
with the deterministic nature of machines. A few years
ago I devoted several months to developing and writing
up an approach that resolves this apparent dichotomy.
-------------------
> These empirical questions are all, or **should** be, simply
> mainstream science.

Yes! Yes! It is very nice to know that this is your opinion. Try to tell this to the people at Accelerating Future and they will start loving you. I have never seen it stated this clearly before, so one cannot blame people for assuming your (well, mostly Dale’s) position to be a dualistic or vitalistic one.
Given the huge possibilities such technologies (if they are ever invented) will open, it is not surprising that the subject is discussed with an emotional tone of voice.
> How has Ayn Rand's philosophy influenced your work? […]

I keep hearing her name on this blog. I must read her at some point. I promise not to take what I read too seriously. (I have even heard that her followers formed some sort of cult…)