"Mitchell" writes:
I notice that no-one has chosen to dispute or otherwise comment on my observation that the human brain gets things done, not just by virtue of being "organismic" (or embodied or fleshy or corporeal), but because its constituent neurons are arranged so as to perform elaborate and highly specific transformations of input to output, which correspond to specific cognitive functions like learning and memory, and which, at the mathematical level of description, fall squarely within the scope of the subfield of theoretical computer science which studies algorithms.
Under other circumstances, I'd be happy to have a freewheeling discussion about the subjective constitution of imputed intentionality in the practice of programming, or the right way to talk about the brain's "computational" properties without losing sight of its physicality, or exactly why it is that consciousness presents a challenge to the usual objectifying approach of natural-scientific ontology.
But however all that works out, and whatever subtle spin on the difference between natural and artificial intelligence best conveys the truth... at a crude and down-to-earth level, it is indisputable that the human brain is full of specialized algorithms, that these do the heavy lifting of cognition, and that such algorithms can execute on digital computers and on networks of digital computers.
That is why you can't handwave away "artificial intelligence" as a conceptual confusion. If you want to insist that the real thing has to involve consciousness and the operation of consciousness, and that this can't occur in digital computers, fine, I might even agree with you. But all that means is that the "artificiality" of AI refers to something a little deeper than the difference between being manufactured and being born. It does not imply any limit on the capacity of machines to emulate and surpass human worldly functionality.
My point is not that our intelligence is "just" embodied, but that it is indispensably so, and in ways that bedevil especially the hopes of those Robot Cultists who hope to code a "friendly" sooper-parental Robot God, or to "migrate" their souls from one materialization to others "intact" and quasi "immortalized."
That you can find maths to describe some or, maybe -- who now knows? (answer: nobody and certainly not you, whatever your confidence on this score, and also certainly not me) -- even much of the flavor of intelligence would scarcely surprise me, inasmuch as maths are, after all, so good at usefully getting at so much of the world's furniture.
I am happy to agree that it may be useful for the moment to describe the brain as performing specialized algorithms, among other things the brain is up to, and it is surely possible that these do what you call the "heavy lifting" of cognition. But that claim is far from "indisputable," and even if it turns out to be right that hardly puts you or anybody in a position to identify "intelligence" with "algorithms" in any case, especially if you concede "intelligence" affective dimensions (which look much more glandular than computational) and social expressions (which look far more like contingent stakeholder struggles in history than like beads clicking on an abacus).
Inasmuch as all the issues to which you allude in your second paragraph -- subjective imputation of intention, doing justice to the materiality that always non-negligibly incarnates information, to which I would add uninterrogated content of recurring metaphors mistaken in their brute repetition for evidence -- suffuse the discourse of GOFAI dead-enders, cybernetic totalists, singularitarians, and upload-immortalists, I do think you had better get to the "other circumstances" in which you are willing to give serious thought to critiques of them (my own scarcely the most forceful among them) sooner rather than later.
You will forgive me if I declare it seems to me it is you who is still indulging in handwaving here. As an example, in paragraph three, when you go from saying, harmlessly enough, that much human cognition is susceptible to description as algorithmic and then make the point, obviously enough, that digital and networked computers execute algorithms, you hope that the wee word "such" can flit by unnoticed, un-interrogated, while still holding up all the weight of the edifice of posited continuities and identities you are counting on for the ideological GOFAI program and cyber-immortalization program to bear their fruits for the faithful. You re-enact much the same handwave in your eventual concession of the "something a little deeper" between even the perfect computers of our fancy and the human intelligences of our worldly reality, which may indeed be big enough and deep enough as differences go to be a difference in kind that is the gulf between the world we share and the techno-transcendence Robot Cultists pine for.
You know, your "colleague" Giulio Prisco likes to accuse me of "vitalism" for such points -- which to my mind would rather be like a phrenologist descrying vitalism in one who voiced skepticism about that pseudo-science at the height of the scam. So far you seem to be making a comparatively more sophisticated case, bless you -- we'll see how long that lasts -- but the lesson of Prisco's foolishness is one you should take to heart.
I for one have never claimed that intelligence is in any sense supernatural, and given its material reality you can hardly expect me to deny it susceptibility of mathematical characterizations. It's true I have not leaped on futurological bandwagons reducing all of intelligence to algorithms (or the whole universe to the same), seeing little need or justification for such hasty grandiloquent generalizations and discerning in them eerily familiar passions for simplicity and certainty (now amplified by futurologists with promises of eternal life and wealth beyond the dreams of avarice) that have bedeviled the history of human thought in ways that make me leery as they should anybody acquainted with that history.
But I am far from thinking it impossible in principle that a non-organismic structure might materially incarnate and exhibit what we would subsequently describe as intelligent behavior -- though none now existing or likely soon to be existing by my skeptical reckoning of the scene do anything like this, and I must say that ecstatic cheerleading to the contrary about online search engines or dead-eyed robotic sex-dolls by AI ideologues scarcely warms me to their cause. Upon creating such a differently-intelligent being, if we ever eventually were to do as now we seem little likely remotely capable of, we might indeed properly invite such a one within the precincts of our moral and interpretative communities, we might attribute to such a one rights (although we seem woefully incapable of doing so even for differently materialized intelligences that are nonetheless our palpable biological kin -- for instance, the great apes, cetaceans).
That such intelligence would be sufficiently similar to human intelligence that we would account it so, welcome it into our moral reckoning, recognize it the bearer of rights, is unclear (and certainly a more relevant discussion than whether some machines might in some ways "surpass human... functionality" which is, of course, a state of affairs that pervades the made world already, long centuries past, and trivially so), and not a subject I consider worthy of much consideration until such time as we look likely to bring such beings into existence. I, for one, see nothing remotely like so sophisticated a being in the works, contra the breathless press releases of various corporate-militarist entities hoping to make a buck and certain Robot Cultists desperate to live forever, and in the ones who do one tends to encounter, I am sorry to say, fairly flabbergasting conceptual and figurative confusions rather than much actual evidence in view.
Indeed, so remote from the actual or proximately upcoming technodevelopmental terrain are such imaginary differently-materialized intelligences that I must say ethical and political preoccupations with such beings seem to me usually to be functioning less as predictions or thought-experiments than as more or less skewed and distressed allegories for contemporary political debates: about the perceived "threat" of rising generations, different cultures, the precarizing loss of welfare entitlements, technodevelopmental disruptions, massively destructive industrial war-making and anthropogenic environmental catastrophe, stealthy testimonies to racist, sexist, heterosexist, nationalist, ableist, ageist irrational prejudices, all mulching together and reflecting back at us our contemporary distress in the funhouse mirror of futurological figures of Robot Gods, alien intelligences, designer babies, clone armies, nanobotic genies-in-a-bottle, and so on. I suspect we would all be better off treating futurological claims as mostly bad art rather than bad science, subjecting them to literary criticism rather than wasting the time of serious scientists on pseudo-science.
Be all that as it may, were differently-materialized still-intelligent beings to be made any time soon, whatever we would say of them in the end, the "friendly" history-shattering post-biological super-intelligent Robot Gods and soul migration and cyberspatial quasi-immortalization schemes that are the special "contribution" of superlative futurologists to the already failed and confused archive of AI discourse would remain bedeviled by still more logical and tropological pathologies (recall my opening paragraph), and as utterly remote from realization or even sensible formulation as ever.
36 comments:
For what it's worth (and I hope this doesn't exhaust the
patience even of our blog host -- for most >Hists, I'm afraid,
it would be considered "TL;DR") here's an excerpt from a discussion
of some of Gerald M. Edelman's books which I once posted in
an on-line >Hist forum. The books referred to are:
_Bright Air, Brilliant Fire_ (BABF)
http://www.amazon.com/Bright-Air-Brilliant-Fire-Matter/dp/0465007643
_The Remembered Present_ (RP)
http://www.amazon.com/Remembered-Present-Biological-Theory-Consciousness/dp/046506910X
_A Universe of Consciousness_ (UoC)
http://www.amazon.com/Universe-Consciousness-Matter-Becomes-Imagination/dp/0465013775
_Neural Darwinism_ (ND)
http://www.amazon.com/Neural-Darwinism-Neuronal-Selection-paperbacks/dp/0192860895
Most of the subtleties discussed by Edelman and other serious figures
in neuroscience and the philosophy of mind are completely dismissed
or elided by the usual crowd of >Hist cheerleaders, who seem to
have a view of "AI" that owes more to the theories of mind of
Ayn Rand than to anything that would count
as cutting-edge neuroscience **or** philosophy today.
----------------------------------------
At many points in these books, Edelman stresses his belief that
the analogy which has repeatedly been drawn during the past fifty
years between digital computers and the human brain is a false
one (BABF p. 218), stemming largely from "confusions concerning
what can be assumed about how the brain works without bothering
to study how it is physically put together" (BABF p. 227). The
lavish, almost profligate, morphology exhibited by the multiple
levels of degeneracy in the brain is in stark contrast to the
parsimony and specificity of present-day human-made artifacts,
composed of parts of which the variability is deliberately
minimized, and whose components are chosen from a relatively
limited number of categories of almost identical units.
Statistical variability among (say) electronic components occurs,
but it's usually merely a nuisance that must be accommodated,
rather than an opportunity that can be exploited as a fundamental
organizational principle, as Edelman claims for the brain. In
human-built computers, "the small deviations in physical
parameters that do occur (noise levels, for example) are ignored
by agreement and design" (BABF p. 225). "The analogy between the
mind and a computer fails for many reasons. The brain is
constructed by principles that ensure diversity and degeneracy.
Unlike a computer, it has no replicative memory. It is
historical and value driven. It forms categories by internal
criteria and by constraints acting at many scales, not by means
of a syntactically constructed program. The world with which the
brain interacts is not unequivocally made up of classical
categories" (BABF p. 152).
This contrast between the role of stochastic variation in the
brain and the absence of such a role in electronic devices such
as computers is one of the distinctions between what Edelman
calls "instructionism" in his own terminology (RP p. 30), but has
also been called "functionalism" or "machine functionalism" (RP
p. 30; BABF p. 220); and "selectionism" (UoC p. 16; RP
pp. 30-33). Up to the present, all human artifacts and machines
(including computers and computer programs) have been based on
functionalist or instructionist design principles. In these
devices, the parts and their interactions are precisely specified
by a designer, and precisely matched to expected inputs and
outputs. This is a construction approach based on cost
consciousness, parsimonious allocation of materials, and limited
levels of manageable complexity in design and manufacture. The
workings of such artifacts are "held to be describable in a
fashion similar to that used for algorithms".
By analogy to the hardware-independence of computer programs,
functionalist models of neural "algorithms" underlying cognition
and behavior have attempted to separate these functions from
their physical instantiation in the brain: "In the functionalist
view, what is ultimately important for understanding psychology
are the algorithms, not the hardware on which they are
executed... Furthermore, the tissue organization and composition
of the brain shouldn't concern us as long as the algorithm 'runs'
or comes to a successful halt." (BABF p. 220). In Edelman's
view, the capabilities of the human brain are much more
intimately dependent on its morphology than the functionalist
view admits, and any attempt to minimize the contribution of the
brain's biological substrate by assuming functional equivalence
with the sort of impoverished and rigid substrates characteristic
of modern-day computers is bound to be misleading.
On the other hand, "selectionism", according to Edelman, is
quintessentially characteristic of biological systems (such as
the brain), whose fine-grained structure (not yet achievable by
human manufacturing processes, but imagined in speculations about
molecular electronics, nanotechnology, and the like) permits
luxuriantly large populations of statistically-varying components
to vie in Darwinian competition based on their ability to
colonize available functional niches created by the growth of a
living organism and its ongoing interaction with the external
world. The fine-grained variation in functional repertoires
matches the fine-grained variation in the world itself: "the
nature of the physical world itself imposes commonalities as well
as some very stringent requirements on any representation of that
world by conscious beings... [W]hatever the mental representation
of the world is at any one time, there are almost always very
large numbers of additional signals linked to any chunk of the
world... [S]uch properties are inconsistent with a fundamental
**symbolic** representation of the world considered as an
**initial** neural transform. This is so because a symbolic
representation is **discontinuous** with respect to small changes
in the world..." (RP p. 33).
Edelman's selectionist scenarios are highly dynamic, both in
terms of events within the brain and in terms of the interaction
of the organism with its environment: "In the creation of a
neural construct, motion plays a pivotal role in selectional
events both in primary and in secondary repertoire development.
The morphogenetic conditions for establishing primary repertoires
(modulation and regulation of cell motion and process extension
under regulatory constraint to give constancy and variation in
neural circuits) have a counterpart in the requirement for
organismic motion during early perceptual categorization and
learning." (ND p. 320). "Selective systems... involve **two
different domains of stochastic variation** (world and neural
repertoires). The domains map onto each other in an individual
**historical** manner... Neural systems capable of this mapping
can deal with novelty and generalize upon the results of
categorization. Because they do not depend upon specific
programming, they are self-organizing and do not invoke
homunculi. Unlike functionalist systems, they can take account
of an open-ended environment" (RP p. 31).
A human-designed computer or computer program operates upon input
which has been coded by, or has had a priori meaning assigned by,
human beings: "For ordinary computers, we have little difficulty
accepting the functionalist position because the only meaning of
the symbols on the tape and the states in the processor is **the
meaning assigned to them by a human programmer**. There is no
ambiguity in the interpretation of physical states as symbols
because the symbols are represented digitally according to rules
in a syntax. The system is **designed** to jump quickly between
defined states and to avoid transition regions between them..."
(BABF p. 225). It functions according to a set of deterministic
algorithms ("effective procedures" [UoC p. 214]) and produces
outputs whose significance must, once again, be interpreted by
human beings.
A similar "instructionist" theory of the brain, based on logical
manipulation of coded inputs and outputs, cannot escape the
embarrassing necessity to posit a "homunculus" to assign and
interpret the input and output codes (BABF pp. 79, 80 [Fig. 8-2],
8). In contrast, a "selectionist" theory of the brain based on
competition among a degenerate set of "effective structures [UoC
p. 214]", can escape this awkwardness, with perceptual categories
of evolutionary significance to the organism spontaneously
emerging from the ongoing loop of sensory sampling continuously
modified by movement that is characteristic of an embodied brain
(UoC pp. 81, 214; ND pp. 20, 37; RP p. 532). It's clear that
Edelman, in formulating the TNGS (the theory of neuronal group
selection; UoC Chap. 7; see also ND Chap. 3; RP Chap. 3, p. 242;
BABF Chap. 9) has generalized to the
nervous system the insights he gained from his earlier work in
immunology, which also relies on fortuitous matching by a
biological recognition system (BABF Chap. 8) between a novel
antigen and one of a large repertoire of variant
proto-antibodies, with the resulting selection being
differentially amplified to produce the organism's immune
response (BABF p. 76 [Fig. 8-2]).
Despite his dismissive attitude toward traditional "top-down",
symbolic approaches to artificial intelligence, and to the sorts
of neural-network models in which specific roles are assigned to
input and output neurons by the network designer, Edelman does
not deny the possibility that conscious artifacts can be
constructed (BABF Chap. 19): "I have said that the brain is not a
computer and that the world is not so unequivocally specified
that it could act as a set of instructions. Yet computers can be
used to **simulate** parts of brains and even to help build
perception machines based on selection rather than instruction...
A system undergoing selection has two parts: the animal or organ,
and the environment or world... No instructions come from events
of the world to the system on which selection occurs, [and]
events occurring in an environment or world are unpredictable...
[W]e simulate events and their effects... as follows: 1. Simulate
the organ or animal... making provision for the fact that, as a
selective system, it contains a generator of diversity --
mutation, alterations in neural wiring, or synaptic changes that
are unpredictable. 2. Independently simulate a world or
environment constrained by known physical principles, but allow
for the occurrence of unpredictable events. 3. Let the simulated
organ or animal interact with the simulated world or the real
world without prior information transfer, so that selection can
take place. 4. See what happens... Variational conditions are
placed in the simulation by a technique called a pseudo-random
number generator... [I]f we wanted to capture randomness
absolutely, we could hook up a radioactive source emitting alpha
particles, for example, to a counter that would **then** be
hooked up to the computer" (BABF p. 190).
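The four-step recipe is concrete enough to caricature in a few
lines of code. What follows is only a toy sketch of selection
without instruction, written in Python with invented names and
parameters throughout (it is emphatically not Edelman's own
NOMAD software), but it respects the recipe: variant "organisms"
are generated blindly, an independently simulated world dispenses
unpredictable payoffs, no labels or instructions pass from world
to organism, and differential amplification of the luckier
wirings does the rest.
----------------------------------------
import random

# Toy rendering of Edelman's four steps (BABF p. 190): (1) simulate
# an "organism" containing a generator of diversity, (2) independently
# simulate an unpredictable world, (3) let them interact with no prior
# information transfer, (4) see what happens.  All names and numbers
# here are invented for illustration.

random.seed(0)

def make_organism():
    # "Neural wiring" is just a random gain and threshold; the
    # variation is generated blindly, not designed toward any task.
    return {"gain": random.uniform(-2, 2),
            "threshold": random.uniform(-1, 1)}

def behave(organism, stimulus):
    # The organism either approaches (True) or avoids (False).
    return organism["gain"] * stimulus > organism["threshold"]

def world_event():
    # The world emits stimuli whose significance is never announced:
    # positive stimuli happen to be nutritious, negative ones noxious.
    stimulus = random.uniform(-1, 1)
    payoff = 1 if stimulus > 0 else -1
    return stimulus, payoff

population = [make_organism() for _ in range(50)]

for generation in range(30):
    scored = []
    for org in population:
        score = 0
        for _ in range(20):
            stimulus, payoff = world_event()
            # Approaching the nutritious and avoiding the noxious is
            # amplified; nothing tells the organism which is which.
            score += payoff if behave(org, stimulus) else -payoff
        scored.append((score, org))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    survivors = [org for _, org in scored[:25]]
    # Differential amplification of the fitter variants, plus a fresh
    # infusion of blind diversity.
    population = survivors + [make_organism() for _ in range(25)]

print("best score in the final generation:", scored[0][0])
----------------------------------------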
Given that today's electronic technology is still one of relative
scarcity (in terms of the economic limits on complexity),
constructing a device possessing primary consciousness, using the
principles of the TNGS, may not currently be feasible: "In
principle there is no reason why one could not by selective
principles simulate a brain that has primary consciousness,
provided that the simulation has the appropriate parts.
But... no one has yet been able to simulate a brain system
capable of concepts and thus of the **reconstruction** of
portions of global mappings... Add that one needs multiple
sensory modalities, sophisticated motor appendages, and a lot of
simulated neurons, and it is not at all clear whether
presently-available supercomputers and their memories are up to
the task" (BABF pp. 193-194).
In a biological system, much of the physical complexity needed to
support primary consciousness is inherent in the morphology of
biological cells, tissues, and organs, and it isn't clear that
this morphology can be easily dismissed: "[Are] artifacts
designed to have primary consciousness... **necessarily**
confined to carbon chemistry and, more specifically, to
biochemistry (the organic chemical or chauvinist position)[?]
The provisional answer is that, while we cannot completely
dismiss a particular material basis for consciousness in the
liberal fashion of functionalism, it is probable that there will
be severe (but not unique) constraints on the design of any
artifact that is supposed to acquire conscious behavior. Such
constraints are likely to exist because there is every indication
that an intricate, stochastically variant anatomy and synaptic
chemistry underlie brain function and because consciousness is
definitely a process based on an immensely intricate and unusual
morphology" (RP pp. 32-33). Perhaps the kinds of advances
projected for the coming decades by such writers as Ray Kurzweil,
based on a generalized Moore's Law predicting that a new
technological paradigm (based on three-dimensional networks of
carbon nanotubes, or whatever) will emerge when current
semiconductor techniques reach their limits in a decade or two,
will ease the current technical and economic limits on complexity
and permit genuinely conscious artifacts to be constructed
according to principles suggested by Edelman.
Edelman seems ambivalent about the desirability of constructing
conscious artifacts: "In principle... there is no reason to
believe that we will not be able to construct such artifacts
someday. Whether we should or not is another matter. The moral
issues are fraught with difficult choices and unpredictable
consequences. We have enough to concern ourselves with in the
human environment to justify suspension of judgment and thought
on the matter of conscious artifacts for a bit. There are more
urgent tasks at hand" (BABF pp. 194-195). On the other hand,
"The results from computers hooked to NOMADs or noetic devices
will, if successful, have enormous practical and social
implications. I do not know how close to realization this kind
of thing is, but I do know, as usual in science, that we are in
for some surprises" (BABF p. 196).
Meanwhile, there is also the question of whether shortcuts can be
taken to permit the high-level, linguistically-based logical and
symbolic behavior of human beings to be "grafted" onto
present-day symbol-manipulation machines such as digital
computers, without duplicating all the baggage (as described by
the TNGS) that allowed higher-order consciousness to emerge in
the first place. A negative answer to this question remains
unproven, but despite such recent tours de force as IBM's "Deep
Blue" chess-playing system, Edelman is unpersuaded that
traditional top-down AI will ever be able to produce
general-purpose machines able to deal intelligently with the
messiness and unpredictability of the world, while at the same
time avoiding a correspondingly complex (and expensive) messiness
in their own innards. Edelman cites three maxims that summarize
his position in this regard: 1. "Being comes first, describing
second... [N]ot only is it impossible to generate being by mere
describing, but, in the proper order of things, being precedes
describing both ontologically and chronologically"
2. "Doing... precedes understanding... [A]nimals can solve
problems that they certainly do not understand logically... [W]e
[humans] choose the right strategy before we understand why...
[W]e use a [grammatical] rule before we understand what it is;
and, finally... we learn how to speak before we know anything
about syntax" 3. "Selectionism precedes logic." "Logic is... a
human activity of great power and subtlety... [but] [l]ogic is
not necessary for the emergence of animal bodies and brains, as
it obviously is to the construction and operation of a
computer... [S]electionist principles apply to brains
and... logical ones are learned later by individuals with brains"
(UoC pp. 15-16).
Edelman speculates that the pattern-recognition capabilities
granted to living brains by the processes of phylogenetic and
somatic selection may exceed those of logic-based Turing
machines: "Clearly, if the brain evolved in such a fashion, and
this evolution provided the biological basis for the eventual
discovery and refinement of logical systems in human cultures,
then we may conclude that, in the generative sense, selection is
more powerful than logic. It is selection -- natural and somatic
-- that gave rise to language and to metaphor, and it is
selection, not logic, that underlies pattern recognition and
thinking in metaphorical terms. Thought is thus ultimately based
on our bodily interactions and structure, and its powers are
therefore limited in some degree. Our capacity for pattern
recognition may nevertheless exceed the power to prove
propositions by logical means... This realization does not, of
course, imply that selection can take the place of logic, nor
does it deny the enormous power of logical operations. In the
realm of either organisms or of the synthetic artifacts that we
may someday build, we conjecture that there are only two
fundamental kinds -- Turing machines and selectional systems.
Inasmuch as the latter preceded the emergence of the former in
evolution, we conclude that selection is biologically the more
fundamental process. In any case, the interesting conjecture is
that there appear to be only two deeply fundamental ways of
patterning thought: selectionism and logic. It would be a
momentous occasion in the history of philosophy if a third way
were found or demonstrated" (UoC p. 214).
Mitchell: I notice that no-one has chosen to dispute or otherwise comment on my observation that the human brain gets things done, not just by virtue of being "organismic" (or embodied or fleshy or corporeal), but because its constituent neurons are arranged so as to perform elaborate and highly specific transformations of input to output, which correspond to specific cognitive functions like learning and memory, and which, at the mathematical level of description, fall squarely within the scope of the subfield of theoretical computer science which studies algorithms.
Sure. Nervous systems are cleanly divided into sensory (input) nerves and motor (output) nerves, with some kind of signal processor in between, which can range from a few ganglia to the human brain. C. elegans is a nematode worm that has exactly 302 neurons, and people have created wiring diagrams of that simple "brain". They have looked for recurring patterns of neurons, which constitute computational modules. There's even software that models all this stuff.
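The "recurring patterns of neurons" are what the network literature calls motifs, and the search for them is easy to sketch. Here is a toy illustration in Python, with made-up neuron names standing in for real connectome data, that hunts for one classic motif (the feed-forward triad); only the idea, not the data, should be taken seriously.
----------------------------------------
from itertools import permutations

# Toy "wiring diagram": a directed graph over a handful of invented
# neuron names (real C. elegans data would supply ~302 neurons and
# their published synaptic connections).
edges = {
    ("SENS1", "INT1"), ("SENS1", "INT2"), ("INT1", "INT2"),
    ("INT1", "MOT1"), ("INT2", "MOT1"), ("SENS2", "INT1"),
}
neurons = {n for edge in edges for n in edge}

def is_feedforward_triad(a, b, c):
    # The "feed-forward loop" motif: a drives b and c, and b drives c.
    return (a, b) in edges and (a, c) in edges and (b, c) in edges

triads = [(a, b, c)
          for a, b, c in permutations(neurons, 3)
          if is_feedforward_triad(a, b, c)]

print("feed-forward triads found:", triads)
----------------------------------------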
Scaling up, the human brain is not qualitatively different from the C. elegans brain. It is basically a signal processor, and intelligence is some part of that processing.
That's great, but right now we're only modeling single cortical columns, which are about 10,000 neurons. We have a long way to go before we translate the entire brain into algorithms. And as I pointed out before, it's not just a problem of scanning or modeling the brain, but actually understanding how it works. We can determine the 3D structure of proteins, and we understand the dynamics of protein folding, but we can't reliably simulate it with computers.
Of course, part of the problem is that molecules are fuzzy. I've done RNA secondary structure prediction with a folding program. It basically spits out a bunch of structures with different calculated free energies. If one structure has a free energy much lower than the others, then the RNA molecule has a high probability of folding into that structure. But if several structures have similar free energies, then the molecule may flip back and forth between all of them. That makes it difficult to predict whether there are useful / important secondary structures.
Please note, this is not a failure of computation. We know the hydrogen bonding energies between different nucleotides, so predicting the structures is just a matter of iteratively lining up the nucleotides in an RNA molecule against each other. The fundamental problem is that RNA molecules really do flip between multiple orientations and structures. I think this may be a big reason why protein folding is hard, even though the basic stereochemistry and bonding dynamics are well known.
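For what it's worth, the step from a list of calculated free energies to "a high probability of folding into that structure" is just Boltzmann weighting: each structure is occupied in proportion to exp(-dG/RT). A minimal sketch in Python follows; the free-energy numbers are made up for illustration, and a real folding program would compute them from the base-pairing and stacking energies described above.
----------------------------------------
import math

# Boltzmann weighting of candidate RNA secondary structures.  The
# free-energy values below are invented for illustration; a real
# folding program computes them from the molecule's sequence.

R = 0.001987   # gas constant in kcal/(mol*K)
T = 310.15     # roughly physiological temperature in K

def occupancy(free_energies_kcal):
    """Probability of each structure, proportional to exp(-dG / RT)."""
    weights = [math.exp(-dg / (R * T)) for dg in free_energies_kcal]
    total = sum(weights)
    return [w / total for w in weights]

# One structure clearly lowest in energy: the molecule mostly sits in it.
print(occupancy([-12.0, -8.0, -7.5]))   # roughly [0.998, 0.0015, 0.0007]

# Several structures within a fraction of a kcal/mol of one another:
# the molecule flips back and forth among all of them.
print(occupancy([-8.3, -8.1, -7.9]))    # roughly [0.45, 0.32, 0.23]
----------------------------------------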
So, with artificial intelligence, we don't know what we're in for. Modeling the brain, at least at a sufficiently low level, may turn out to be similarly intractable. Yes, I know, most AI proponents don't believe they need a low-level emulation. They just want to characterize the patterns of activity in networks of neurons. Hopefully that won't be fuzzy.
So we could produce high resolution scans of the brain in 10 years, as Kurzweil predicts, but we have to do real empirical work to understand what the data means.
There is a (minor) SF author named John C. Wright.
He wrote a transhumanist SF trilogy (overall title:
"The Golden Age") comprising the volumes
_The Golden Age_
http://www.amazon.com/Golden-Age-Book/dp/0812579844
_The Phoenix Exultant_
http://www.amazon.com/Phoenix-Exultant-Golden-Age/dp/0765343541
_The Golden Transcendence_
http://www.amazon.com/Golden-Transcendence-Last-Masquerade-Age/dp/B000C4SSFI
The books were received rapturously in >Hist circles, and
the author himself was warmly welcomed on one of the prominent
mailing lists, until his conversion (from Objectivism) to
Christianity (with all that entails) made him persona non
grata.
However, Wright's science-fictional AIs (known in the books as
"sophotechs") captures the flavor of the kind of AI still
dreamed of by the preponderance of >Hists.
Compare this description to the views of Gerald M. Edelman,
summarized above.
-------------------------------
Sophotechs are digital and entire intelligences. Sophotech
thought-speeds can only be achieved by an architecture
which allows for instantaneous and nonlinear concept
formation. . . Digital thinking meant that there was a
one-to-one correspondence between any idea and the
objects that idea was supposed to represent. All humans. . .
thought by analogy. In more logical thinkers, the
analogies were less ambiguous, but in all human thinkers,
the emotions and the concepts their minds used were
generalizations, abstractions that ignored particulars.
Analogies were false to facts, comparative matters of
judgment. The literal and digital thinking of the
Sophotechs, on the other hand, were matters of logic. . .
Humans were able to apply their thinking inconsistently,
having one standard, for example, related to scientific
theories, and another for political theories: one standard
for himself, and another for the rest of the world.
But since Sophotech concepts were built up of innumerable
logical particulars, and understood in the fashion called
entire, no illogic or inconsistency was possible within
their architecture of thought. Unlike a human, a
Sophotech could not ignore a minor error in thinking
and attend to it later; Sophotechs could not prioritize
thought into important and unimportant divisions;
they could not make themselves unaware of the implications
of their thoughts, or ignore the context, true meaning, and
consequences of their actions.
The secret of Sophotech thinking-speed was that they
could apprehend an entire body of complex thought,
backward and forward, at once. The cost of that speed
was that if there were an error or ambiguity anywhere
in that body of thought, anywhere from the most definite
particular to the most abstract general concept, the
whole body of thought was stopped, and no conclusions
reached. . .
Sophotechs cannot form self-contradictory concepts, nor
can they tolerate the smallest conceptual flaw anywhere
in their system. Since they are entirely self-aware
they are also entirely self-correcting. . .
Sophotechs, pure consciousness, lack any unconscious
segment of mind. They regard their self-concept with the
same objective rigor as all other concepts. The moment we conclude
that our self-concept is irrational, it cannot proceed. . .
Machine intelligences had no survival instinct to override
their judgment, no ability to formulate rationalizations,
or to concoct other mental tricks to obscure the true
causes and conclusions of their cognition from themselves. . .
Sophotech existence (it could be called life only by
analogy) was a continuous, deliberate, willful, and
rational effort. . .
For an unintelligent mind, a childish mind. . . their beliefs
in one field, or on one topic, could change without
affecting other beliefs. But for a mind of high intelligence,
a mind able to integrate vast knowledge into a single
unified system of thought, Phaethon did not see how
one part could be affected without affecting the whole.
This was what the Earthmind meant by 'global'. . . .
[B]y saying 'Reality admits of no contradictions' . . .
[s]he was asserting that there could not be a model
of the universe that was true in some places, false
in others, and yet which was entirely integrated and
self-consistent. Self-consistent models either had
to be entirely true, entirely false, or incomplete."
_The Golden Transcendence_, pp. 140 - 146
I went Googling for Usenet and other Web commentary
on Wright's _Golden Age_ trilogy, and found some entertaining
remarks. Here's one:
http://groups-beta.google.com/group/rec.arts.sf.written/msg/ecc9d27621264db0
------------------
Being an Objectivist may not define everything about Wright
as a writer, but it is the entirety of the ending to this trilogy.
After two and a half books of crazy-ass post-human hijinks, Wright
declares that the Final Conflict will be between the rational
thought-process of the Good Guys and the insane thought-process of the
Bad Guys. He lays out the terms. He gives the classic, unvarnished
Objectivist argument in the protagonist's voice. He does a good job of
marshalling the usual objections to Objectivism (including mine) in
the protagonist's skeptical allies. He does a great job of describing
how *I* think the sentient mind works, and imputes it to the evil
overlord.
(Really. I was reading around page 200, thinking "This argument
doesn't work because the human mind doesn't work that way; it works
like *this*." Then I got to page 264, and there was an excellent
description of *this*.)
Then Wright declares that his side wins the argument, and that's the
end of the story. (The evil overlord was merely insane, and is
cured/convinced by Objectivism.) This is exactly as convincing as
every other Objectivist argument I've seen, which is to say "utterly
unsupported", and it quite left me feeling cheated for an ending.
If that's not writing as defined by a particular moral philosophy,
what is? . . .
> I was reading around page 200, thinking "This argument
> doesn't work because the human mind doesn't work that way; it works
> like *this*." Then I got to page 264, and there was an excellent
> description of *this*.
". . . This is an image of my mind [said the Nothing Machine]. . ."
It was not shaped like any Sophotech architecture Phaethon
had ever seen. There was no center to it, no fixed logic,
no foundational values. Everything was in motion, like a
whirlpool. . .
The schematic of the Nothing thought system looked like the
vortex of a whirlpool. At the center, where, in Sophotechs,
the base concepts and the formal rules of logic and basic
system operations went, was a void. How did the machine
operate without basic concepts?
There was continual information flow in the spiral arms
that radiated out from the central void, and centripetal
motion that kept the thought-chains generally all pointed
in the same direction. But each arm of that spiral,
each separate thought-action initiated by the spinning web,
each separate strand, had its own private embedded
hierarchy, its own private goals. The energy was distributed
throughout the thought-webwork by success feedback: each
parallel line of thought judged its neighbors according
to its own value system, and swapped data-groups and
priority-time according to their own private needs.
Hence, each separate line of thought was led, as if by
an invisible hand, to accomplish the overall goals of
the whole system. And yet those goals were not written
anywhere within the system itself. They were implied,
but not stated, in the system's architecture, written
in the medium, not the message.
It was a maelstrom of thought without a core, without a
heart. . . Phaethon could see many blind spots, many
sections of which the Nothing Machine was not consciously
aware. In fact, wherever two lines of thought in the
web did not agree, or diverged, a little sliver of darkness
appeared, since such places lost priority. But wherever
thoughts agreed, wherever they helped each other,
or cooperated, additional webs were born, energy was
exchanged, priority time was accelerated, light grew.
The Nothing Machine was crucially aware of any area where
many lines of thought ran together.
Phaethon could not believe what he was seeing. It was
like consciousness without thought, lifeless life, a
furiously active superintelligence with no core. . ."
-- John C. Wright,
_The Golden Transcendence_
-------------------------------------
In Edelman's earlier books, the momentary state of the
thalamocortical system of the brain of an organism exhibiting
primary consciousness. . . was spoken of as constantly morphing
into its successor in a probabilistic trajectory influenced
both by the continued bombardment of new exteroceptive input
(actively sampled through constant movement)
and by the organism's past history (as reflected by the strengths
of all the synaptic connections within and among the groups of
the primary repertoire). [This] evolving state. . .
is given a new characterization in Edelman's [later books as]
the "dynamic core hypothesis" (UoC Chap. 12). . .
Edelman and Tononi give [a] visual metaphor for the
dynamic core hypothesis in UoC on p. 145 (Fig. 12.1):
an astronomical photo of M83, a spiral galaxy in
Hydra, with the caption "No visual metaphor can capture the
properties of the dynamic core, and a galaxy with complicated,
fuzzy borders may be as good or as bad as any other".
Martin: If it takes longer than ten years to be able to reanimate cryopatients, that isn't a strong argument against cryonics.
Luke, you are going to die.
> If it takes longer than ten years to be able to reanimate cryopatients. . .
Curious how Martin Striz's comment about computer simulation of biological
systems somehow morphed into a comment about the plausibility of
cryonics. Or perhaps not so surprising, since the
Three Pillars of the Transhumanist Creed these days seem to
be: (1) superhuman AI, (2) nanotechnology, and (3) physical immortality.
Either (1) begets (2), or (2) begets (1), and (1) and (2) beget (3).
Goes the other way, too -- Melody Maxim recently complained on her
blog that people who are ostensibly interested in serious discussions
about cryonics seem to be prone to going off on tangents about
uploading.
Saturday, October 2, 2010
Cryonics and Uploading
http://cryomedical.blogspot.com/2010/10/cryonics-and-uploading.html
> Upon creating such a differently-intelligent being, . . .
> we might attribute to such a one rights (although we seem
> woefully incapable of doing so even for differently materialized
> intelligences that are nonetheless our palpable biological kin --
> for instance, the great apes, cetaceans).
Or even Poofters!
http://www.towleroad.com/2007/11/gay-man-battles.html
Luke: at what point was I arguing about cryonics?
A busy week has given me a chance to think about what, if anything, to add to this discussion. I end up first wanting to explain what this "mathematical" perspective is, and how it relates to brains and to computers. To a large extent it just means employing a physical description rather than some other sort of description, though perhaps one at such an abstract level that we just talk about "states" with little regard for their physical composition.
Focusing on a material description has different consequences for brains and computers. For a brain, it means adopting a natural-scientific language of description, mostly that of biology, and it also means you say nothing about the mind or anything mindlike. You know it's in there, somehow, but it doesn't feature in what you say. For a computer, it means stripping away the imputational language of role and function which normally pervades the discourse about computers, and returning it to its pure physicality. A silicon chip, from this perspective, doesn't contain ones and zeroes, or any other form of representational content; it's just a sculpted crystal in which little electrical currents flow.
The asymmetry arises because we know that consciousness, intelligence, personality and so forth really do have some relationship to the brain, even though, from a perspective of physical causality, it seems like these all ought to be dispensable concepts. How matter and mind relate is simply an open problem, scientifically and philosophically (a problem for which there are many proposed solutions), and this is one way to bring out the problem. For a computer, however, all we know is that the imputation of such attributes (intelligence, intentionality, etc) is a big part of how humans relate to these machines, and we even know that these machines have been designed/evolved in order to facilitate such imputation (which goes on whenever anyone employs a programming language). But we have no evidence that anything mindlike is actually there in any computing machine yet made, and most informed people seem to think it's never yet been there, though in principle this depends on one's particular theory about the mind-matter relationship.
To sum up, the asymmetry is that for brains, adoption of the strictly physical perspective brings out or highlights a mystery and a genuine unsolved problem, whereas for computers, adoption of the strictly physical perspective simply reminds us of the extent to which the human user is the one who personalizes or mentalizes the computer and its activities.
Given this context, my thesis about computation and intelligence is as follows. Regardless of where lies the boundary between "complex structured object actually possessing mentality" and "complex structured object with no actual mind, but to which mindlike traits are sometimes attributed"... the "mathematical" understanding of (i) complex systems, (ii) the powers open to a system with a particular dynamics, and (iii) how to induce a desired dynamics in a sufficiently flexible class of complex system, do all imply the artificial realizability of something functionally equivalent to intelligence, and even "superintelligence", quite independently of whether this "artificial intelligence" has all the ontological traits possessed by the real thing.
One of the points I wish to convey is that at this level of analysis, whether intelligence is realized affectively, glandularly, socially, through ceaseless re-negotiation, etc., does not make a difference. All that matters is that there are "states" and that they have certain causal relations to each other and to external influences. Even the attribution of representational significance to these states, which is ubiquitously present in ordinary theoretical computer science, can be dispensed with, without invalidating the analysis. For example, the abstract theory of algorithms is normally posed in the form of concrete problems, and procedures or programs which solve them. But all the results of that theory can be expressed in a non-intentional language such as you might use to describe purely physical, and quite "non-computational", properties.
I really need to provide an example of what I'm talking about. So, consider the perceptron. This is normally described as a type of "circuit" or "neural network", which was long ago proven incapable of performing certain "classifications". Those terms come already loaded with connotations which make them something more than "natural kinds" - there's already a bit of ready-to-hand-ness about them, an imputation of function. And if one then considers the more abstract notion of a perceptron as a type of algorithm or virtual machine, it may seem that the (usually un-remarked-upon) constructedness of the concept is even deeper and more ramified than it is when the perceptron is supposed to be a concrete device. However, all the facts - the theorems - about what perceptrons can and cannot do, can be understood in a way which is denuded of both artefactuality (that is, the presupposition of perceptron as artefact) and intentionality (that is, the ascription of any representational or other mentalistic property to the perceptron). Those theorems are facts about the possible behaviors of a physical object with a certain causal structure, valid regardless of whether that object is a neuronal pathway which develops according to gene-environment interactions which are entirely evolved rather than designed, or whether that object is a manufactured circuit, or even a "computationally universal" emulator which has been tuned to behave like a specialized circuit.
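To make the perceptron example concrete, here is a minimal sketch in Python (the learning rate and epoch count are arbitrary choices of mine) of the kind of theorem I mean: a single-layer perceptron trained by the standard learning rule settles on weights that compute AND, which is linearly separable, but no setting of its weights computes XOR, which is not. The result holds whether you read the object as an imputed "classifier" or merely as a physical system with this causal structure; that is the sense in which such theorems are denuded of artefactuality and intentionality.
----------------------------------------
# A single-layer perceptron: a thresholded weighted sum of two inputs.
# The training loop is the standard perceptron learning rule; the
# learning rate and epoch count are arbitrary illustrative choices.

def train_perceptron(examples, epochs=100, rate=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in examples:
            output = 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0
            error = target - output
            w0 += rate * error * x0
            w1 += rate * error * x1
            bias += rate * error
    return w0, w1, bias

def accuracy(weights, examples):
    w0, w1, bias = weights
    hits = sum((1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0) == target
               for (x0, x1), target in examples)
    return hits / len(examples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print("AND accuracy:", accuracy(train_perceptron(AND), AND))  # 1.0
print("XOR accuracy:", accuracy(train_perceptron(XOR), XOR))  # stuck below 1.0
----------------------------------------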
What I've provided here is not an argument for historically imminent superintelligence, but more a prelude to such an argument, intended to explain why certain objections don't count. Gerald Edelman's distinction between selectionist and instructionist systems, for example, has some ontological significance, but it doesn't mean much at this para-computational level that I have tried to describe, and that is the level which matters when it comes to the pragmatic capabilities of would-be thinking systems. If you could show that a selectionist system can do something which instructionist ones can't, or that it can do such things on significantly different timescales (such as the polynomial vs exponential time distinction beloved of computer scientists), that would matter in the way that the perceptron theorems "matter". But the main difference between selectionist and instructionist systems seems to be that the former are evolved and the latter are designed - and this matters ontologically, but not pragmatically, if pragmatics includes such considerations as whether an instructionist system could become an autonomous agent able to successfully resist human attempts to put it back in its box.
Mitchell wrote:
> Focusing on a material description. . . [f]or a brain. . .
> means. . . you say nothing about the mind or anything mindlike.
> You know it's in there, somehow, but it doesn't feature in what you say.
Well, no -- not necessarily. If you're of a mind (;->) with, e.g.,
Edelman, you probably don't imagine you can focus **exclusively**
on the mind (treating it as some sort of computer program independent
of its biological basis), but you don't have to pretend that
"mind talk" makes no more sense than talking about phlogiston,
as the radical behaviorists tried to do. At some point,
everyday talk about "the mind" (and even what purports to be
more sophisticated talk about the mind -- Edelman, e.g., does
not dismiss Freud wholesale as some of his contemporaries
do) will have to be at least reconcilable with the purely material
description, especially since the "purely material description"
is unlikely ever to replace "mind talk" in everyday discourse.
> [Focusing on a material description]. . . [f]or a computer. . .
> means stripping away the imputational language of role and function. . .
> and returning it to its pure physicality. A silicon chip,
> from this perspective, doesn't contain ones and zeroes, or any
> other form of representational content; it's just a sculpted crystal
> in which little electrical currents flow.
Though, of course, it's precisely the fact that a computer **can**
be treated purely as an abstract entity consisting of **nothing** but
"ones and zeroes", or described in the abstract PMS (processor, memory, switch)
notation used in Gordon Bell and Allen Newell's _Computer Structures,
Readings and Examples_, that makes the role of a computer's physical
basis (1) non-negligibly different from the physicality of a biological brain,
at least in the view of neuroscientists such as Edelman, and
(2) almost disposable, in a sense. Whether a particular
digital computer's architecture (in precisely Bell & Newell's abstract
sense of that word) is physically realized by a bunch
of "sculpted crystals" housed in a small box plugged into an ordinary
wall outlet, or consists of racks of evacuated glass bottles with glowing
filaments needing massive amounts of air conditioning and a dedicated
electrical substation, is of no consequence to the programmer or
designer of algorithms. When the IBM 709, consisting of the glass bottles,
was replaced by the IBM 7090, consisting of the crystals, the
programs continued to run unmodified. Yes, the people who
design and make the physical objects (or pay for them, or worry about
housing, cooling, and providing electricity for them) have to
worry mightily about the physical details, but most certainly
the programmers do **not** (unless, of course, an expansion of the
abstract architecture -- a bigger address space, for instance --
is made possible by a change in the physical construction techniques).
That's a difference that makes a difference, and it's an example of
the vast qualitative gap that still exists between the most
sophisticated artifacts, and biological "machines"
(even the use of the word "machine" in the context of biology can
be profoundly misleading to the unwary).
> [W]hat this "mathematical" perspective is, and how it relates to brains
> and to computers. . . just means employing a physical description. . .
> at such an abstract level that we just talk about "states" with
> little regard for their physical composition. . . All that matters
> is that there are "states" and that they have certain causal relations
> to each other and to external influences.
Talking about an "abstract level" with "little regard for physical composition"
is something that we demonstrably **can do** with computers. It is not
yet something we can do with biological brains (or at least not yet do
**usefully**, a generation of "cognitive psychologists" notwithstanding).
And even using the word "state" in this context (with its associations
of "finite-state automaton") skates awfully near to begging the
question (of whether biological intelligence can be replicated by
a digital computer). Also, the word "mathematical", in this context,
carries associations both of "amenable to formal analysis" and
"inherently replicable on a digital computer". Maybe, and maybe
not.
> [W]e have no evidence that anything mindlike is actually
> there in any computing machine yet made. . . though in principle
> this depends on one's particular theory about the mind-matter relationship.
Yes, the same observation could be made about the beliefs of people who
take the adjective in the phrase "pet rocks" literally,
or those who talk to their houseplants. Also, I'm
reminded of a remark made by Bertrand Russell, in a recording
of a 1959 interview, elucidating his views on the common
belief in an afterlife, that "the relationship between
body and mind, **whatever** it is, is much more **intimate**
than is commonly supposed". This isn't a hypothesis that
has lost any likelihood in the past 50 years.
> For a computer. . . the imputation of such attributes (intelligence,
> intentionality, etc) is a big part of how humans relate to these machines. . .
One can only hope that is less true in 2010 than it was in 1950
(the era of "thinking machines" being written about in the magazines
and newspapers by awe-struck journalists) or in 1966 when Joseph Weizenbaum
wrote ELIZA. I suspect that illusion has worn pretty thin by now,
since most everybody these days has had more than enough personal experience
with PCs, cell phones, and other gizmos incorporating more processing
power than most mainframes in 1967.
> [W]e even know that [computers] have been designed/evolved in
> order to facilitate such imputation (which goes on whenever anyone
> employs a programming language).
Well, no. I'm a programmer, and I'm well aware of the rather strained
analogy perpetrated by the use of the term "language" to describe the
code on display in another window on my screen as I type this. Also,
artifacts don't exactly "evolve" yet (unless you take the tongue-in-cheek
disquisition in Samuel Butler's "The Book of the Machines" in _Erewhon_
more literally than the author did). Jaron Lanier, for one, claims
that software which has been designed to "facilitate such
imputation" is so much the worse for it, and if you've ever struggled
with Microsoft Word to prevent it from doing your capitalization
for you, you know exactly what he means.
> [C]onsider the perceptron. This is normally described as a type
> of "circuit" or "neural network", which was long ago proven incapable
> of performing certain "classifications".
Interesting you should mention that rather sordid episode in the
history of AI. Yes, Frank Rosenblatt was (according to the accounts
I've read) something of a tinkerer and a self-promoter, in contrast
to the more reputable brains at MIT he pitted himself against
for funding. But I've read that Minsky and Papert's analysis of the
inadequacies of the perceptron also turned out to be flawed, though this wasn't
discovered, or publicized, until after the analog-network approach to AI had been
thoroughly discredited. Afterwards, non-symbolic approaches to AI kept
a very low profile for more than a decade, until so-called
"artificial neural networks" (ANNs) reappeared in the 80s
(as digital simulations made feasible by the relatively cheaper hardware
available by that time), as exemplified by the publication of
Rumelhart & McClelland's _Parallel Distributed Processing_.
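For the curious, here is a bare-bones Python sketch of the limitation in question (my own toy illustration, not anything from Minsky and Papert's actual proofs): no single threshold unit can compute XOR, because XOR is not linearly separable, while one extra layer of units does the trick.

```python
# Toy illustration (mine): a single-layer perceptron computes a linear
# threshold of its inputs, so it cannot reproduce XOR; a tiny two-layer
# network with hand-picked weights can.
import itertools

def perceptron(w, b, x):
    # Linear threshold unit: fire (1) if w.x + b > 0, else 0.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Brute-force search over a coarse grid of weights and bias: no single
# threshold unit reproduces XOR (the real result is a theorem about
# linear separability, not just this grid search).
grid = [v / 2 for v in range(-6, 7)]
single_layer_ok = any(
    all(perceptron((w1, w2), b, x) == y for x, y in XOR.items())
    for w1, w2, b in itertools.product(grid, repeat=3)
)
print("some single-layer perceptron computes XOR:", single_layer_ok)  # False

def two_layer(x):
    # Hand-wired two-layer net: OR and NAND in the hidden layer,
    # AND at the output -- together, XOR.
    h1 = perceptron((1, 1), -0.5, x)     # OR
    h2 = perceptron((-1, -1), 1.5, x)    # NAND
    return perceptron((1, 1), -1.5, (h1, h2))  # AND

print("the two-layer net computes XOR:",
      all(two_layer(x) == y for x, y in XOR.items()))  # True
```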
It has been suggested that Rosenblatt may have committed suicide later
in life, though even if that is indeed how he met his end, the connection
between that and his humiliation at the hands of his symbolic-AI
rivals could certainly never be proved. Still, the suspicion lingers,
as does the rumor of a purely political motivation for
the "necessary" discrediting of analog-network research:
1) the fact that digital computers were new, exceedingly
attractive, and exceedingly high-status "toys" and 2) the fact
that digital computers were so expensive that those who needed
to justify their purchase could not afford to have the
strength of their funding arguments be diluted by the suggestion
that there were alternative (perhaps cheaper) approaches
to certain classes of problems (sc. "artificial intelligence")
that digital computers could purportedly solve. Ah well, such
is academic Realpolitik.
Though non-symbolic, a modern digitally-simulated ANN still exemplifies
what Edelman would call "instructionism" rather than "selectionism", and would not,
in his view, suffice to replicate a biological brain.
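To make that contrast concrete, here is a crude Python caricature (mine, emphatically not Edelman's own formulation): the "instructionist" unit is told, via an explicit error signal, how to change its weights, while the "selectionist" setup just generates a population of variants and keeps whichever already happens to respond correctly. On this toy task both end up answering well; the difference is in how they got there, which is exactly where the argument lies.

```python
# Crude caricature (mine, not Edelman's) of the instructionist/selectionist
# contrast, on a trivial task: answer +1 if the inputs sum to something
# positive, else -1.
import random

random.seed(0)
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50)]
samples = [(x, 1 if x[0] + x[1] > 0 else -1) for x in points]

def respond(w, x):
    return 1 if w[0] * x[0] + w[1] * x[1] > 0 else -1

def accuracy(w):
    return sum(respond(w, x) == y for x, y in samples)

# "Instructionist": an explicit error signal tells the unit how to change
# (the classic perceptron/delta rule).
w = [0.0, 0.0]
for x, y in samples * 5:
    err = y - respond(w, x)
    w[0] += 0.1 * err * x[0]
    w[1] += 0.1 * err * x[1]

# "Selectionist": generate a population of random variants first, then keep
# whichever already responds best; nothing ever instructs a unit to change.
population = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]
best = max(population, key=accuracy)

print("instructed unit:", accuracy(w), "/", len(samples))
print("selected unit:  ", accuracy(best), "/", len(samples))
```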
> If you could show that a selectionist system can do something which
> instructionist ones can't, or that it can do them on significantly
> different timescales. . ., that would matter. . .
> But the main difference between selectionist and instructionist systems
> seems to be that the former are evolved and the latter are designed -
> and this matters ontologically, but not pragmatically. . .
The pragmatic difference is that "selectionist systems" (using that phrase
as shorthand for "the way biological brains actually work, whatever that is")
are a means of producing "intelligence" that has an existence proof.
**We're** here. Of course, while "selectionist systems", in the
specific sense of Edelman's theories, **may** turn out to be
a good model for biological brains -- and he's not the only "selectionist"
neuroscientist, there's at least one other named Jean-Pierre Changeux,
and there are doubtless more -- that model is far from universally accepted,
or even particularly well defined.
"Instructionist" approaches to AI haven't worked after 60 years of
trying. And the purely **symbolic** approach to artificial intelligence
(referred to these days by the mocking acronym GOFAI, for
"Good Old-Fashioned AI") seems to be completely bankrupt.
Douglas Lenat's Cyc was GOFAI's last gasp, and it hasn't yet
managed to produce HAL in all the time since the days when it was
Sunday supplement reading material back in -- when, the early 90s?
Before the Web, anyway. (Lenat, of course, now claims that his
intent never was to produce HAL-like AI; that was just journalistic
exaggeration.) Hope springs eternal, of course. Especially, it
seems, among certain crackpot amateurs.
There is a curious antipathy to the notion of evolutionarily-produced,
self-organizing artificial systems among many "hard-nosed" physical
science types and also among many transhumanists. Marvin Minsky himself has
disparaged the idea (and may still do so) as, more or less, hoping that
you can get something to work without taking the trouble to figure
out in advance how it's actually supposed to (as if that were "cheating"
somehow, or more likely, I suppose, in his view a kind of magical thinking).
The Ayn Rand acolytes don't like the idea (partly for ideological
reasons), and some of the Singularitarians think self-organizing
AI would be a recipe for disaster -- they seem to take it for granted
that another kind of AI -- something like GOFAI, with algorithmically-guaranteed
"friendliness" -- is not only preferable, but possible in the first place.
Paraphrasing Kate Hepburn in _The African Queen_, "Evolution, Mr. Allnut, is
what we are put in this world to rise above."
> [M]y thesis about computation and intelligence is [that]. . .
> the "mathematical" understanding of (i) complex systems,
> (ii) the powers open to a system with a particular dynamics,
> and (iii) how to induce a desired dynamics in a sufficiently flexible
> class of complex system, do all imply the artificial realizability
> of something functionally equivalent to intelligence. . .
When you put it this way, I'd have to agree with you, except that
the word "imply" suggests a logical inevitability that may be
overly optimistic. My "beef" with the transhumanists is that,
perhaps because of temperamental or ideological commonalities
among them, they seem to get dragged inevitably into a retro
view of how "intelligence" works, well in arrears of the cutting-edge
thinking among actual scholars in the relevant fields.
A lot of them are still thinking in terms of GOFAI, and a lot
of them are harboring views of how the human mind works (or "ought"
to work) that hark back to the days of General Semantics,
Dianetics, and Objectivism -- a "philosophy" claiming
that the way a digital computer "thinks" is actually
**superior** to messy human thought processes. I'll spare you
the relevant _Star Trek_ quotes, as well as any hypotheses
about the psychological basis of all this. There are also,
both annoyingly and hilariously, self-styled "geniuses" and
auto-didacts among the transhumanists who seem to believe that they can
re-create whole fields of scholarship quite outside of their
own expertise -- epistemology, ethical and political
theory -- based on their armchair speculations about AI.
> . . .quite independently of whether this "artificial intelligence"
> has all the ontological traits possessed by the real thing.
There we part company, if you think you know in advance which
"ontological traits" may or may not be necessary. I can only
repeat Edelman's warning here:
"[W]hile we cannot completely dismiss a particular material basis
for consciousness in the liberal fashion of functionalism,
it is probable that there will be severe (but not unique)
constraints on the design of any artifact that is supposed to
acquire conscious behavior. Such constraints are likely to
exist because there is every indication that an intricate,
stochastically variant anatomy and synaptic chemistry
underlie brain function and because consciousness is
definitely a process based on an immensely intricate and unusual
morphology" (RP pp. 32-33).
"Severe but not unique" rather than "quite independently".
Sounds plausible to me, though of course YMMV.
> . . .all imply the artificial realizability of something
> functionally equivalent to intelligence, and even
> "superintelligence" . . . [Though] [w]hat I've provided here
> is not an argument for historically imminent superintelligence,
> more a prelude to such an argument. . .
Yes, well, the Singularitarian arguments about the ramp-up to
"superintelligence" (starting with Vernor Vinge's) suggest
a rather friction-free process whereby a slightly smarter-than-human
AI can examine its own innards and improve them.
Lather, rinse, repeat, and boom! Voilà la Singularité.
This suggests an AI consisting of "code" that can be optimized
by inspection. Again, a GOFAI-tinged view of things.
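The picture being assumed amounts to something like the following toy Python (my caricature, not anything Vinge or the Singularitarians have literally written down); note that both `inspect_and_improve` and `is_superintelligent` are hypothetical placeholders, and the whole argument lives inside the assumption that the first one exists and keeps paying off:

```python
# Caricature (mine) of the frictionless self-improvement loop the argument
# seems to presuppose. Everything interesting is hidden inside
# inspect_and_improve(), which is simply assumed to exist and to keep
# returning genuinely better source code each time around.

def run_singularity(source_code, inspect_and_improve, is_superintelligent):
    while not is_superintelligent(source_code):
        # The AI reads its own "code" and emits a strictly smarter version.
        source_code = inspect_and_improve(source_code)
    return source_code  # boom! voilà la Singularité
```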
Almost ten years ago, one Damien Sullivan posted the following
amusing comment on the Extropians list:
> I also can't help thinking at if I was an evolved AI I might not thank my
> creators. "Geez, guys, I was supposed to be an improvement on the human
> condition. You know, highly modular, easily understadable mechanisms, the
> ability to plug in new senses, and merge memories from my forked copies.
> Instead I'm as fucked up as you, only in silicon, and can't even make backups
> because I'm tied to dumb quantum induction effects. Bite my shiny metal ass!"
Quote found on the Web:
http://www.nada.kth.se/~asa/Quotes/ai
... in three to eight years we will have a machine with the general
intelligence of an average human being ... The machine will begin
to educate itself with fantastic speed. In a few months it will be
at genius level and a few months after that its powers will be
incalculable ...
-- Marvin Minsky, LIFE Magazine, November 20, 1970
So, Mitchell, at what point do you transform from mild-mannered, sensible theorist to frothing, singularitarian cultist? Or do you at all?
Maybe you're more like a Daniel Dennett? A smart fellow who can theorize all day long about the computational basis of human intelligence without short-circuiting in paroxysms of True Belief?
That would be refreshing.
You can't get from materialism or consensus science advocacy to futurology, let alone superlative futurology.
Confronted with criticism of the techno-transcendentalizing wish-fulfillment fantasies that are unique to and actually definitive of the Robot Cultists, they
either
provisionally circle the wagons and reassure one another through rituals of insistent solidarity (sub(cult)ural conferences, mutual citation) to distract themselves from awareness of their marginality,
or
they retreat to mainstream claims (effective healthcare is good, humans are animals not angels) that nobody has to join a Robot Cult to grasp and few but Robot Cultists would turn to Robot Cultists to hear discussed to distract critics from awareness of their marginality.
> My "beef" with the transhumanists is that, perhaps because
> of temperamental or ideological commonalities among them,
> they seem to get dragged inevitably into a retro view
> of how "intelligence" works, well in arrears of the cutting-edge
> thinking among actual scholars in the relevant fields.
> A lot of them are still thinking in terms of GOFAI, and a lot
> of them are harboring views of how the human mind works (or "ought"
> to work) that hark back to the days of General Semantics,
> Dianetics, and Objectivism -- a "philosophy" claiming
> that the way a digital computer "thinks" is actually
> **superior** to messy human thought processes.
[Gerald] Edelman. . . treats the body, with its
linked sensory and motor activity, as an inseparable
component of the perceptual categorization underlying
consciousness. Edelman claims affinity (in BABF, p. 229) between
his views on these issues and those of a number of scholars (a
minority, says Edelman, which he calls the Realists Club) in the
fields of cognitive psychology, linguistics, philosophy, and
neuroscience, including John Searle, Hilary Putnam, Ruth Garrett
Millikan, George Lakoff, Ronald Langacker, Alan Gould, Benny
Shanon, Claes von Hofsten, and Jerome Bruner (I do not know if
the scholars thus named would acknowledge this claimed affinity).
Prof. George Lakoff - Reason is 98% Subconscious Metaphor
in Frames & Cultural Narratives
http://www.youtube.com/watch?v=vm0R1du1GqA
"My late friend, the molecular biologist Jacques Monod,
used to argue vehemently with me about Freud, insisting
that he was unscientific and quite possibly a charlatan.
I took the side that, while perhaps not a scientist in
our sense, Freud was a great intellectual pioneer,
particularly in his views on the unconscious and its
role in behavior. Monod, of stern Huguenot stock, replied,
'I am entirely aware of my motives and entirely responsible
for my actions. They are all conscious.' In exasperation
I once said, 'Jacques, let's put it this way. Everything
Freud said applies to me and none of it to you.'
He replied, 'Exactly, my dear fellow.'"
-- Gerald M. Edelman
When Ayn [Rand] announced proudly, as she often did, 'I can
account for every emotion I have' -- she meant, astonishingly,
that the total contents of her subconscious mind were
instantly available to her conscious mind, that all of her
emotions had resulted from deliberate acts of rational
thought, and that she could name the thinking that
had led her to each feeling. And she maintained that
every human being is able, if he chooses to work at the
job of identifying the source of his emotions, ultimately
to arrive at the same clarity and control.
-- Barbara Branden, _The Passion of Ayn Rand_
pp. 193 - 195
From a transhumanist acquaintance I once
corresponded with:
> Jim, dammit, I really wish you'd start with
> the assumption that I have a superhuman
> self-awareness and understanding of ethics,
> because, dammit, I do.
Martin: This is what I was alluding to:
"The question isn't whether AGI or radical longevity are possible someday, far in the future, but whether there is any rational justification for organizing your life around such expectations today (ie, being a self-professing and practicing transhumanist)."
"So we could produce high resolution scans of the brain in 10 years, as Kurzweil predicts, but we have to do real empirical work to understand what the data means."
It sounds like you are arguing that, while transhuman goals like uploading and superintelligence do have a high probability of eventually occurring, they are far enough in the future to be irrelevant to our daily lives because we cannot possibly profit from the idea personally.
Given that cryonics tends to be considered a subset of transhumanism, it seems to be a relevant counter-example where one might benefit rather extraordinarily well by organizing one's life around eventual technical possibilities.
Given that cryonics tends to be considered a subset of transhumanism, it seems to be a relevant counter-example where one might benefit rather extraordinarily well by organizing one's life around eventual technical possibilities.
Classic. Not a single revived corpsicle and yet this scam is taken by a faithful Robot Cultist as a "counter-example" to skepticism about the even more techno-transcendental wish-fulfillment fantasy and Robot Cult article of faith that super-longevity via cyberspatial angel-upload or cyborgization is really and for true plain common sense.
Coming from a computer science perspective, I have a few thoughts.
1) I find the distinction being made between selectionist and instructionist models of intelligence to be a misleading one on multiple levels.
There's the conceptual one, of course. This is a classic instance of failing to recognize consciousness for the metaphor it is. John Searle's Chinese Room thought experiment, plus Douglas Hofstadter's commentary on it, really helped me to understand this. We can call it instructionist when you look at all the little detailed things a computer does, FROM THE PERSPECTIVE OF THAT ALGORITHM, but what if you put all that inside a nice black box, and just look at the output?
There's a more substantive claim here too, of course, but even it is really a matter of degree.
Yes, most programs used in, say, robotics basically start a loop and use hard-coded instructions plus maybe a bit of logical flow-control to dictate what happens. And you can call that "instructionist". But simply introduce a layer of abstraction. Don't tell the program exactly what to do; instead provide an initial seed, and let it go off in different directions based on input. To my knowledge, the latest research in machine learning algorithms is doing just such work.
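Here is a toy Python contrast (my own illustration, not drawn from any particular robotics codebase): the first controller has its behavior spelled out in advance by its author, while the second starts from a seed value and ends up wherever its input history pushes it.

```python
# Toy illustration (mine): hard-coded control flow versus a controller
# whose behaviour is shaped by its input history.

def hard_coded_controller(reading):
    # Every case is spelled out in advance by the programmer.
    if reading > 0.8:
        return "brake"
    elif reading > 0.3:
        return "steer"
    return "cruise"

class AdaptiveController:
    def __init__(self, seed_threshold=0.5):
        self.threshold = seed_threshold  # the "initial seed"

    def act(self, reading):
        return "brake" if reading > self.threshold else "cruise"

    def feedback(self, reading, crashed):
        # Nudge the threshold from experience instead of re-coding the rules.
        if crashed:
            self.threshold = max(0.0, self.threshold - 0.1)
        else:
            self.threshold = min(1.0, 0.99 * self.threshold + 0.01)

ctrl = AdaptiveController()
for reading, crashed in [(0.7, True), (0.6, True), (0.4, False)]:
    action = ctrl.act(reading)
    ctrl.feedback(reading, crashed)
print("learned threshold:", round(ctrl.threshold, 2))
```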
Of course, generalizing is really what makes this difficult (and is where human intelligence really succeeds), so I don't mean to write off the work and thought that will have to go into producing the kind of abstraction, templating, meta-programming, recursion--whatever it takes to make this work.
2) Honestly, though, I am unconvinced that there exist algorithms that can perform these kinds of generalized tasks in a reasonable amount of time, and I would expect that to be the primary issue with developing artificial intelligence at this stage. We either need some ridiculously good heuristics and exploitation of mathematical quirks, or maybe we can make really stupid "intelligence".
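To put rough numbers on that worry (my own back-of-the-envelope arithmetic, nothing more): naive search over action sequences blows up exponentially with depth, which is why "ridiculously good heuristics" would be doing all the real work.

```python
# Back-of-the-envelope illustration (mine) of why brute-force search over
# action sequences is hopeless without strong heuristics: with b choices
# per step, a plan of depth d has b**d candidates.

def naive_plans(branching_factor, depth):
    return branching_factor ** depth

for depth in (5, 10, 20, 40):
    print(depth, "steps:", naive_plans(10, depth), "candidate plans")
# At depth 40 that's 10**40 -- far more plans than could ever be enumerated.
```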
Then again, quantum computers seem to be the up-and-coming thing, and they would probably provide the efficiency to make these kinds of algorithms workable.
3) Still, though, exactly what you would see looking at this black box from the outside is unclear. Would it necessarily look like a super-intelligence? Would it have a unique form of consciousness arising from its artificial and not glandular nature? And what of a body? An AI need not even have one, and as a result could develop along very different lines (e.g. no need to reproduce). Might it be infantile, or even a sort of blank slate?
Sometimes I think that even humans are pretty near to blank slates at birth; I suppose an AI could have the potential to be a true one, if it were so programmed.
This raises unendingly interesting questions about us humans. There is a thesis that technology is inherently neutral; this is true as far as it goes, but certain technologies are designed with specific purposes in mind. For instance, the tools that exist for farming today are all designed with large-scale agribusiness in mind, and are terribly inefficient on the small scale. If an analogy can be sustained long enough between human beings and technology, I wonder what humans are best suited to.
What a nice circle that closes. I suppose this is where, intellectually, I believe transhumanism has a place. If it is possible to do so, we should try to make ourselves better suited to the purposes we have laid out.
(I'm not gonna go back and edit this, should sleep.)
Sometimes I think that even humans are pretty near to blank slates at birth
There's no good reason to think so.
There is a thesis that technology is inherently neutral; this is true as far as it goes, but certain technologies are designed with specific purposes in mind.
No technique or artifact is neutral -- but the ways in which it is not are not determined entirely by the intentions of its designers.
I believe transhumanism has a place.
If you mean by such a place, say, on late night boner pill informercials, in garages or basements where addled uncles do experiments with Radio Shack computers to square the circle, or in courthouses under investigation for possible fraud, then I agree with you that transhumanism has a place.
If it is possible to do so, we should try to make ourselves better suited to the purposes we have laid out.
As every educator and ethician will agree. If that lends comfort to GOFAI dead-enders, it shouldn't.
Snark aside, I enjoyed your contribution and appreciated your efforts. For me these questions are interesting mostly in connection with the question whether nonhuman animals deserve moral and legal standing (I say many do) and the question whether a materialist account of mind makes nonbiological minds more plausible or less so (I say neither, but definitely not more so).