Comments on amor mundi: Of Differently Intelligent Beings

<i>Sometimes I think that even humans are pretty near to blank slates at birth</i>

There's no good reason to think so.

<i>There is a thesis that technology is inherently neutral; this is true as far as it goes, but certain technologies are designed with specific purposes in mind.</i>

No technique or artifact is neutral -- but the ways in which it is not are not determined entirely by the intentions of its designers.

<i>I believe transhumanism has a place.</i>

If you mean by such a place, say, on late-night boner-pill infomercials, in garages or basements where addled uncles do experiments with Radio Shack computers to square the circle, or in courthouses under investigation for possible fraud, then I agree with you that transhumanism has a place.

<i>If it is possible to do so, we should try to make ourselves better suited to the purposes we have laid out.</i>

As every educator and ethician will agree. If that lends comfort to GOFAI dead-enders, it shouldn't.

Snark aside, I enjoyed your contribution and appreciated your efforts.
For me these questions are interesting mostly in connection with the question whether nonhuman animals deserve moral and legal standing (I say many do) and the question whether a materialist account of mind makes nonbiological mind more plausible or less so (I say neither, but definitely not more so).

[comment by Dale Carrico, 2013-08-26 02:41]

Coming from a perspective of computer science, I have a few thoughts.

1) I find the distinction being made between selectionist and instructionist models of intelligence to be a misleading one on multiple levels.

There's the conceptual one, of course. This is a classic instance of failing to recognize consciousness for the metaphor it is. John Searle's Chinese Room thought experiment, plus Douglas Hofstadter's commentary on it, really helped me to understand this. We can call it instructionist when you look at all the little detailed things a computer does, FROM THE PERSPECTIVE OF THAT ALGORITHM, but what if you put all that inside of a nice black box, and just look at the output?

There's a more substantive claim here too, of course, but even it is really a matter of degree.

Yes, most programs used in, say, robotics basically start a loop and use hard-coded instructions plus maybe a bit of logical flow control to dictate what happens. And you can call that "instructionist". But simply introduce a layer of abstraction. Don't tell the program exactly what to do; instead provide an initial seed, and let it go off in different directions based on input. To my knowledge, the latest research in machine learning is doing just such work.
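The "seed plus feedback" contrast described above can be made concrete with a deliberately toy sketch (everything here is illustrative -- the task, the learning rate, and the function names are not from any particular library): instead of the designer writing the decision rule, an arbitrary starting threshold is nudged into place by labeled examples.

```python
# A minimal contrast between a hard-coded rule and a learned one.
# The task (threshold a number at 0.5) is trivial on purpose.

# "Instructionist" style: the designer writes the decision rule directly.
def instructed(x):
    return 1 if x > 0.5 else 0

# "Seeded" style: start from an arbitrary threshold and let the data
# move it; the designer never states where the boundary lies.
def train_threshold(samples, threshold=0.0, lr=0.1, epochs=50):
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if x > threshold else 0
            # Wrong in the positive direction -> raise the bar, and vice versa.
            threshold += lr * (pred - label)
    return threshold

samples = [(0.1, 0), (0.3, 0), (0.6, 1), (0.9, 1)]
t = train_threshold(samples)
learned = lambda x: 1 if x > t else 0
```

After training, `learned` agrees with `instructed` on the sample points even though no boundary was ever written into it -- which is the sense in which a layer of abstraction blurs the instructionist picture.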
Of course, generalizing is really what makes this difficult (and is where human intelligence really succeeds), so I don't mean to write off the work and thought that will have to go into producing the kind of abstraction, templating, meta-programming, recursion -- whatever it takes to make this work.

2) Honestly, though, I am unconvinced that there exist algorithms that can perform these kinds of generalized tasks in a reasonable amount of time, and I would expect that to be the primary issue with developing artificial intelligence at this stage. We either need some ridiculously good heuristics and exploitation of mathematical quirks, or maybe we can make really stupid "intelligence".

Then again, quantum computers seem to be the up-and-coming thing, and they would probably provide the efficiency to make these kinds of algorithms workable.

3) Still, though, exactly what you would see looking at this black box from the outside is unclear. Would it necessarily look like a superintelligence? Would it have a unique form of consciousness arising from its artificial and not glandular nature? And what of a body? An AI need not even have one, and could therefore develop along very different lines (e.g. no need to reproduce). Might it be infantile, or even a sort of blank slate?

Sometimes I think that even humans are pretty near to blank slates at birth; I suppose an AI could have the potential to be a true one, if it were so programmed.

This raises unendingly interesting questions about us humans. There is a thesis that technology is inherently neutral; this is true as far as it goes, but certain technologies are designed with specific purposes in mind. For instance, the tools that exist for farming today are all designed with large-scale agribusiness in mind, and are terribly inefficient on the small scale.
If an analogy can be sustained long enough between human beings and technology, I wonder what humans are best suited to.

What a nice circle that closes. I suppose this is where, intellectually, I believe transhumanism has a place. If it is possible to do so, we should try to make ourselves better suited to the purposes we have laid out.

(I'm not gonna go back and edit this, should sleep.)

[comment by Brian Everett Peterson, 2013-08-26 01:31]

<i>Given that cryonics tends to be considered a subset of transhumanism, it seems to be a relevant counter-example where one might benefit rather extraordinarily well by organizing one's life around eventual technical possibilities.</i>

Classic. Not a single revived corpsicle, and yet this scam is taken by a faithful Robot Cultist as a "counter-example" to skepticism about the even more techno-transcendental wish-fulfillment fantasy and Robot Cult article of faith that super-longevity via cyberspatial angel-upload or cyborgization is really and for true plain common sense.

[comment by Dale Carrico, 2010-10-31 12:00]
Martin: This is what I was alluding to:

<i>"The question isn't whether AGI or radical longevity are possible someday, far in the future, but whether there is any rational justification for organizing your life around such expectations today (i.e., being a self-professing and practicing transhumanist)."</i>

<i>"So we could produce high resolution scans of the brain in 10 years, as Kurzweil predicts, but we have to do real empirical work to understand what the data means."</i>

It sounds like you are arguing that, while transhuman goals like uploading and superintelligence do have a high probability of eventually occurring, they are far enough in the future to be irrelevant to our daily lives because we cannot possibly profit from the idea personally.

Given that cryonics tends to be considered a subset of transhumanism, it seems to be a relevant counter-example where one might benefit rather extraordinarily well by organizing one's life around eventual technical possibilities.

[comment by Anonymous, 2010-10-31 11:47]

> My "beef" with the transhumanists is that, perhaps because
> of temperamental or ideological commonalities among them,
> they seem to get dragged inevitably into a retro view
> of how "intelligence" works, well in arrears of the cutting-edge
> thinking among actual scholars in the relevant fields.
> A lot of them are still thinking in terms of GOFAI, and a lot
> of them are harboring views of how the human mind works (or "ought"
> to work) that hark back to the days of General Semantics,
> Dianetics, and Objectivism -- a "philosophy" claiming
> that the way a digital computer "thinks" is actually
> **superior** to messy human thought processes.

[Gerald] Edelman . . .
treats the body, with its linked sensory and motor activity, as an inseparable component of the perceptual categorization underlying consciousness. Edelman claims affinity (in BABF, p. 229) between his views on these issues and those of a number of scholars (a minority, says Edelman, which he calls the Realists Club) in the fields of cognitive psychology, linguistics, philosophy, and neuroscience, including John Searle, Hilary Putnam, Ruth Garret Millikan, George Lakoff, Ronald Langacker, Alan Gould, Benny Shanon, Claes von Hofsten, and Jerome Bruner (I do not know if the scholars thus named would acknowledge this claimed affinity).

Prof. George Lakoff - Reason is 98% Subconscious Metaphor in Frames & Cultural Narratives
http://www.youtube.com/watch?v=vm0R1du1GqA

"My late friend, the molecular biologist Jacques Monod, used to argue vehemently with me about Freud, insisting that he was unscientific and quite possibly a charlatan. I took the side that, while perhaps not a scientist in our sense, Freud was a great intellectual pioneer, particularly in his views on the unconscious and its role in behavior. Monod, of stern Huguenot stock, replied, 'I am entirely aware of my motives and entirely responsible for my actions. They are all conscious.' In exasperation I once said, 'Jacques, let's put it this way. Everything Freud said applies to me and none of it to you.' He replied, 'Exactly, my dear fellow.'"

-- Gerald M. Edelman

When Ayn [Rand] announced proudly, as she often did, 'I can account for every emotion I have' -- she meant, astonishingly, that the total contents of her subconscious mind were instantly available to her conscious mind, that all of her emotions had resulted from deliberate acts of rational thought, and that she could name the thinking that had led her to each feeling. And she maintained that every human being is able, if he chooses to work at the job of identifying the source of his emotions, ultimately to arrive at the same clarity and control.

-- Barbara Branden, _The Passion of Ayn Rand_, pp. 193-195

From a transhumanist acquaintance I once corresponded with:

> Jim, dammit, I really wish you'd start with
> the assumption that I have a superhuman
> self-awareness and understanding of ethics,
> because, dammit, I do.

[comment by jimf, 2010-10-31 09:20]

You can't get from materialism or consensus science advocacy to futurology, let alone superlative futurology.

Confronted with criticism in respect to the techno-transcendentalizing wish-fulfillment fantasies that are unique to and actually definitive of the Robot Cultists, they

<i>either</i>

provisionally circle the wagons and reassure one another through rituals of insistent solidarity (sub(cult)ural conferences, mutual citation) to distract themselves from awareness of their marginality,

<i>or</i>

they retreat to mainstream claims (effective healthcare is good, humans are animals not angels) that nobody has to join a Robot Cult to grasp and few but Robot Cultists would turn to Robot Cultists to hear discussed, to distract
critics from awareness of their marginality.

[comment by Dale Carrico, 2010-10-30 10:55]

So, Mitchell, at what point do you transform from mild-mannered, sensible theorist to frothing, singularitarian cultist? Or do you at all?

Maybe you're more like a Daniel Dennett? A smart fellow who can theorize all day long about the computational basis of human intelligence without short-circuiting in paroxysms of True Belief?

That would be refreshing.

[comment by Impertinent Weasel, 2010-10-29 21:47]
Quote found on the Web:

http://www.nada.kth.se/~asa/Quotes/ai

... in three to eight years we will have a machine with the general intelligence of an average human being ... The machine will begin to educate itself with fantastic speed. In a few months it will be at genius level and a few months after that its powers will be incalculable ...

-- Marvin Minsky, LIFE Magazine, November 20, 1970

[comment by jimf, 2010-10-29 21:07]

> . . . all imply the artificial realizability of something
> functionally equivalent to intelligence, and even
> "superintelligence" . . . [Though] [w]hat I've provided here
> is not an argument for historically imminent superintelligence,
> more a prelude to such an argument . . .

Yes, well, the Singularitarian arguments about the ramp-up to "superintelligence" (starting with Vernor Vinge's) suggest a rather friction-free process whereby a slightly smarter-than-human AI can examine its own innards and improve them. Lather, rinse, repeat, and boom! Voilà la Singularité. This suggests an AI consisting of "code" that can be optimized by inspection. Again, a GOFAI-tinged view of things.

Almost ten years ago, one Damien Sullivan posted the following amusing comment on the Extropians list:

> I also can't help thinking that if I was an evolved AI I might not thank my
> creators. "Geez, guys, I was supposed to be an improvement on the human
> condition. You know, highly modular, easily understandable mechanisms, the
> ability to plug in new senses, and merge memories from my forked copies.
> Instead I'm as fucked up as you, only in silicon, and can't even make backups
> because I'm tied to dumb quantum induction effects. Bite my shiny metal ass!"

[comment by jimf, 2010-10-29 09:24]

> [M]y thesis about computation and intelligence is [that] . . .
> the "mathematical" understanding of (i) complex systems,
> (ii) the powers open to a system with a particular dynamics,
> and (iii) how to induce a desired dynamics in a sufficiently flexible
> class of complex system, do all imply the artificial realizability
> of something functionally equivalent to intelligence . . .

When you put it this way, I'd have to agree with you, except that the word "imply" suggests a logical inevitability that may be overly optimistic. My "beef" with the transhumanists is that, perhaps because of temperamental or ideological commonalities among them, they seem to get dragged inevitably into a retro view of how "intelligence" works, well in arrears of the cutting-edge thinking among actual scholars in the relevant fields. A lot of them are still thinking in terms of GOFAI, and a lot of them are harboring views of how the human mind works (or "ought" to work) that hark back to the days of General Semantics, Dianetics, and Objectivism -- a "philosophy" claiming that the way a digital computer "thinks" is actually **superior** to messy human thought processes. I'll spare you the relevant _Star Trek_ quotes, as well as any hypotheses about the psychological basis of all this.
There are also, both annoyingly and hilariously, self-styled "geniuses" and auto-didacts among the transhumanists who seem to believe that they can re-create whole fields of scholarship quite outside of their own expertise -- epistemology, ethical and political theory -- based on their armchair speculations about AI.

> . . . quite independently of whether this "artificial intelligence"
> has all the ontological traits possessed by the real thing.

There we part company, if you think you know in advance which "ontological traits" may or may not be necessary. I can only repeat Edelman's warning here:

"[W]hile we cannot completely dismiss a particular material basis for consciousness in the liberal fashion of functionalism, it is probable that there will be severe (but not unique) constraints on the design of any artifact that is supposed to acquire conscious behavior. Such constraints are likely to exist because there is every indication that an intricate, stochastically variant anatomy and synaptic chemistry underlie brain function and because consciousness is definitely a process based on an immensely intricate and unusual morphology" (RP, pp. 32-33).

"Severe but not unique" rather than "quite independently". Sounds plausible to me, though of course YMMV.

[comment by jimf, 2010-10-29 09:24]

> If you could show that a selectionist system can do something which
> instructionist ones can't, or that it can do them on significantly
> different timescales . . ., that would matter . . .
> But the main difference between selectionist and instructionist systems
> seems to be that the former are evolved and the latter are designed -
> and this matters ontologically, but not pragmatically . . .

The pragmatic difference is that "selectionist systems" (using that phrase as a shorthand for "the way biological brains actually work, whatever it is") constitute a means of producing "intelligence" that has an existence proof. **We're** here. Of course, while "selectionist systems", in the specific sense of Edelman's theories, **may** turn out to be a good model for biological brains -- and he's not the only "selectionist" neuroscientist; there's at least one other, named Jean-Pierre Changeux, and there are doubtless more -- that model is far from universally accepted, or even particularly well defined.

"Instructionist" approaches to AI haven't worked after 60 years of trying. And the purely **symbolic** approach to artificial intelligence (referred to these days by the mocking acronym GOFAI, for "Good Old-Fashioned AI") seems to be completely bankrupt. Douglas Lenat's Cyc was GOFAI's last gasp, and it hasn't yet managed to produce HAL in all the time since the days when it was Sunday-supplement reading material back in -- when, the early 90s? Before the Web, anyway. (Lenat, of course, now claims that his intent never was to produce HAL-like AI; that was just journalistic exaggeration.) Hope springs eternal, of course. Especially, it seems, among certain crackpot amateurs.

There is a curious antipathy to the notion of evolutionarily-produced, self-organizing artificial systems among many "hard-nosed" physical science types and also among many transhumanists.
Marvin Minsky himself has disparaged the idea (and may still do so) as, more or less, hoping that you can get something to work without taking the trouble to figure out in advance how it's actually supposed to (as if that were "cheating" somehow -- or, more likely, I suppose, in his view a kind of magical thinking). The Ayn Rand acolytes don't like the idea (partly for ideological reasons), and some of the Singularitarians think self-organizing AI would be a recipe for disaster -- they seem to take it for granted that another kind of AI -- something like GOFAI, with algorithmically-guaranteed "friendliness" -- is not only preferable, but possible in the first place. Paraphrasing Kate Hepburn in _The African Queen_: "Evolution, Mr. Allnut, is what we are put in this world to rise above."

[comment by jimf, 2010-10-29 09:23]

> [C]onsider the perceptron. This is normally described as a type
> of "circuit" or "neural network", which was long ago proven incapable
> of performing certain "classifications".

Interesting you should mention that rather sordid episode in the history of AI. Yes, Frank Rosenblatt was (according to the accounts I've read) something of a tinkerer and a self-promoter, in contrast to the more reputable brains at MIT he pitted himself against for funding. But I've read that Minsky and Papert's analysis of the inadequacies of the perceptron also turned out to be flawed, though this wasn't discovered, or publicized, before the analog-network approach to AI had been thoroughly discredited.
Afterwards, non-symbolic approaches to AI kept a very low profile for more than a decade, until so-called "artificial neural networks" (ANNs) reappeared in the 80s (as digital simulations made feasible by the relatively cheaper hardware available by that time), as exemplified by the publication of Rumelhart & McClelland's _Parallel Distributed Processing_.

It has been suggested that Rosenblatt may have committed suicide later in life, though even if that is indeed how he met his end, the connection between that and his humiliation at the hands of his symbolic-AI rivals could certainly never be proved. Still, the suspicion lingers, as does the rumor of a purely political motivation for the "necessary" discrediting of analog-network research: 1) the fact that digital computers were new, exceedingly attractive, and exceedingly high-status "toys", and 2) the fact that digital computers were so expensive that those who needed to justify their purchase could not afford to have the strength of their funding arguments diluted by the suggestion that there were alternative (perhaps cheaper) approaches to certain classes of problems (sc. "artificial intelligence") that digital computers could purportedly solve. Ah well, such is academic Realpolitik.

Though non-symbolic, a modern digitally-simulated ANN still exemplifies what Edelman would call "instructionism" rather than "selectionism", and would not, in his view, suffice to replicate a biological brain.

[comment by jimf, 2010-10-29 09:22]

> [W]hat this "mathematical" perspective is, and how it relates to brains
> and to computers . . . just means employing a physical description . . .
> at such an abstract level that we just talk about "states" with
> little regard for their physical composition. . . All that matters
> is that there are "states" and that they have certain causal relations
> to each other and to external influences.

Talking about an "abstract level" with "little regard for physical composition" is something that we demonstrably **can do** with computers. It is not yet something we can do with biological brains (or at least not yet do **usefully**, a generation of "cognitive psychologists" notwithstanding).

And even using the word "state" in this context (with its associations of "finite-state automaton") skates awfully near to begging the question (of whether biological intelligence can be replicated by a digital computer). Also, the word "mathematical", in this context, carries associations both of "amenable to formal analysis" and "inherently replicable on a digital computer". Maybe, and maybe not.

> [W]e have no evidence that anything mindlike is actually
> there in any computing machine yet made. . . though in principle
> this depends on one's particular theory about the mind-matter relationship.

Yes, the same observation could be made about the beliefs of people who take the adjective in the phrase "pet rocks" literally, or those who talk to their houseplants. Also, I'm reminded of a remark made by Bertrand Russell, in a recording of a 1959 interview, elucidating his views on the common belief in an afterlife: that "the relationship between body and mind, **whatever** it is, is much more **intimate** than is commonly supposed". This isn't a hypothesis that has lost any likelihood in the past 50 years.

> For a computer. . . the imputation of such attributes (intelligence,
> intentionality, etc) is a big part of how humans relate to these machines . . .
One can only hope that is less true in 2010 than it was in 1950 (the era of "thinking machines" being written about in the magazines and newspapers by awe-struck journalists) or in 1967, when Joseph Weizenbaum wrote ELIZA. I suspect that illusion has worn pretty thin by now, since most everybody these days has had more than enough personal experience with PCs, cell phones, and other gizmos incorporating more processing power than most mainframes in 1967.

> [W]e even know that [computers] have been designed/evolved in
> order to facilitate such imputation (which goes on whenever anyone
> employs a programming language).

Well, no. I'm a programmer, and I'm well aware of the rather strained analogy perpetrated by the use of the term "language" to describe the code on display in another window on my screen as I type this. Also, artifacts don't exactly "evolve" yet (unless you take the tongue-in-cheek disquisition in Samuel Butler's "The Book of the Machines" in _Erewhon_ more literally than the author did). Jaron Lanier, for one, claims that software which has been designed to "facilitate such imputation" is so much the worse for it, and if you've ever struggled with Microsoft Word to prevent it from doing your capitalization for you, you know exactly what he means.

[comment by jimf, 2010-10-29 09:20]
Mitchell wrote:

> Focusing on a material description. . . [f]or a brain. . .
> means. . . you say nothing about the mind or anything mindlike.
> You know it's in there, somehow, but it doesn't feature in what you say.

Well, no -- not necessarily. If you're of a mind (;->) with, e.g., Edelman, you probably don't imagine you can focus **exclusively** on the mind (treating it as some sort of computer program independent of its biological basis), but you don't have to pretend that "mind talk" makes no more sense than talking about phlogiston, as the radical behaviorists tried to do. At some point, everyday talk about "the mind" (and even what purports to be more sophisticated talk about the mind -- Edelman, e.g., does not dismiss Freud wholesale as some of his contemporaries do) will have to be at least reconcilable with the purely material description, especially since the "purely material description" is unlikely ever to replace "mind talk" in everyday discourse.

> [Focusing on a material description]. . . [f]or a computer. . .
> means stripping away the imputational language of role and function. . .
> and returning it to its pure physicality.
> A silicon chip, from this perspective, doesn't contain ones and zeroes, or any
> other form of representational content; it's just a sculpted crystal
> in which little electrical currents flow.

Though, of course, it's precisely the fact that a computer **can** be treated purely as an abstract entity consisting of **nothing** but "ones and zeroes", or described in the abstract PMS (processor, memory, switch) notation used in Gordon Bell and Allen Newell's _Computer Structures: Readings and Examples_, that makes the role of a computer's physical basis (1) non-negligibly different from the physicality of a biological brain, at least in the view of neuroscientists such as Edelman, and (2) almost disposable, in a sense. Whether a particular digital computer's architecture (in precisely Bell & Newell's abstract sense of that word) is physically realized by a bunch of "sculpted crystals" housed in a small box plugged into an ordinary wall outlet, or consists of racks of evacuated glass bottles with glowing filaments needing massive amounts of air conditioning and a dedicated electrical substation, is of no consequence to the programmer or designer of algorithms. When the IBM 709, consisting of the glass bottles, was replaced by the IBM 7090, consisting of the crystals, the programs continued to run unmodified.
Yes, the people who design and make the physical objects (or pay for them, or worry about housing, cooling, and providing electricity for them) have to worry mightily about the physical details, but the programmers most certainly do **not** (unless, of course, an expansion of the abstract architecture -- a bigger address space, for instance -- is made possible by a change in the physical construction techniques).

That's a difference that makes a difference, and it's an example of the vast qualitative gap that still exists between the most sophisticated artifacts and biological "machines" (even the use of the word "machine" in the context of biology can be profoundly misleading to the unwary).

[comment by jimf, 2010-10-29 09:18]

((hoping the first part of this message got through...))

One of the points I wish to convey is that at this level of analysis, whether intelligence is realized affectively, glandularly, socially, through ceaseless re-negotiation, etc., does not make a difference. All that matters is that there are "states" and that they have certain causal relations to each other and to external influences. Even the attribution of representational significance to these states, which is ubiquitously present in ordinary theoretical computer science, can be dispensed with, without invalidating the analysis. For example, the abstract theory of algorithms is normally posed in the form of concrete problems, and procedures or programs which solve them. But all the results of that theory can be expressed in a non-intentional language such as you might use to describe purely physical, and quite "non-computational", properties.
<br /><br />I really need to provide an example of what I'm talking about. So, consider <a href="http://en.wikipedia.org/wiki/Perceptron" rel="nofollow">the perceptron</a>. This is normally described as a type of "circuit" or "neural network", which was long ago proven incapable of performing certain "classifications". Those terms come already loaded with connotations which make them something more than "natural kinds" - there's already a bit of ready-to-hand-ness about them, an imputation of function. And if one then considers the more abstract notion of a perceptron as a type of algorithm or virtual machine, it may seem that the (usually un-remarked-upon) constructedness of the concept is even deeper and more ramified than it is when the perceptron is supposed to be a concrete device. However, all the facts - the theorems - about what perceptrons can and cannot do, can be understood in a way which is denuded of both artefactuality (that is, the presupposition of perceptron as artefact) and intentionality (that is, the ascription of any representational or other mentalistic property to the perceptron). Those theorems are facts about the possible behaviors of a physical object with a certain causal structure, valid regardless of whether that object is a neuronal pathway which develops according to gene-environment interactions which are entirely evolved rather than designed, or whether that object is a manufactured circuit, or even a "computationally universal" emulator which has been tuned to behave like a specialized circuit. <br /><br />What I've provided here is not an argument for historically imminent superintelligence, more a prelude to such an argument, intended to explain why certain objections don't count. 
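As a concrete illustration of the perceptron theorems referred to above: the sketch below is an editor's illustrative toy, not part of the original comment, and the helper names (`train_perceptron`, `accuracy`) are invented for the example. It trains a single threshold unit with the classic perceptron learning rule. The unit converges on the linearly separable AND function, but no assignment of weights can ever classify XOR correctly, so training never reaches perfect accuracy there -- and the theorem holds regardless of what physical object realizes the update rule.

```python
# Toy single-layer perceptron: a threshold unit trained with the
# classic perceptron learning rule (w += lr * error * x).

def train_perceptron(samples, epochs=25, lr=0.1):
    """Train a two-input threshold unit; returns (weights, bias)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(w, b, samples):
    """Fraction of samples the trained unit classifies correctly."""
    correct = 0
    for (x1, x2), target in samples:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        correct += (pred == target)
    return correct / len(samples)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = [(x, int(x[0] and x[1])) for x in inputs]  # linearly separable
XOR = [(x, x[0] ^ x[1]) for x in inputs]         # not linearly separable

w, b = train_perceptron(AND)
print(accuracy(w, b, AND))  # converges: prints 1.0

w, b = train_perceptron(XOR)
print(accuracy(w, b, XOR))  # strictly less than 1.0, by the perceptron theorems
```

The point of the example in this context: the same facts about what the unit can and cannot do would hold whether the update rule were realized in silicon, vacuum tubes, or (in principle) a suitably structured biological pathway -- the theorems concern the causal structure, not the substrate.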
Gerald Edelman's distinction between selectionist and instructionist systems, for example, has some ontological significance, but it doesn't mean much at this para-computational level that I have tried to describe, and that is the level which matters when it comes to the pragmatic capabilities of would-be thinking systems. If you could show that a selectionist system can do something which instructionist ones can't, or that it can do them on significantly different timescales (such as the polynomial vs exponential time distinction beloved of computer scientists), that would matter in the way that the perceptron theorems "matter". But the main difference between selectionist and instructionist systems seems to be that the former are evolved and the latter are designed - and this matters ontologically, but not pragmatically, if pragmatics includes such considerations as whether an instructionist system could become an autonomous agent able to successfully resist human attempts to put it back in its box.Mitchellhttps://www.blogger.com/profile/10768655514143252049noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-83699540879148706132010-10-28T00:22:43.594-07:002010-10-28T00:22:43.594-07:00A busy week has given me a chance to think about w...A busy week has given me a chance to think about what, if anything, to add to this discussion. I end up first wanting to explain what this "mathematical" perspective is, and how it relates to brains and to computers. To a large extent it just means employing a physical description rather than some other sort of description, though perhaps one at such an abstract level that we just talk about "states" with little regard for their physical composition. <br /><br />Focusing on a material description has different consequences for brains and computers. For a brain, it means adopting a natural-scientific language of description, mostly that of biology, and it also means you say nothing about the mind or anything mindlike. 
You know it's in there, somehow, but it doesn't feature in what you say. For a computer, it means stripping away the imputational language of role and function which normally pervades the discourse about computers, and returning it to its pure physicality. A silicon chip, from this perspective, doesn't contain ones and zeroes, or any other form of representational content; it's just a sculpted crystal in which little electrical currents flow. <br /><br />The asymmetry arises because we know that consciousness, intelligence, personality and so forth really do have some relationship to the brain, even though, from a perspective of physical causality, it seems like these all ought to be dispensable concepts. How matter and mind relate is simply an open problem, scientifically and philosophically (a problem for which there are many proposed solutions), and this is one way to bring out the problem. For a computer, however, all we know is that the imputation of such attributes (intelligence, intentionality, etc) is a big part of how humans relate to these machines, and we even know that these machines have been designed/evolved in order to facilitate such imputation (which goes on whenever anyone employs a programming language). But we have no evidence that anything mindlike is actually there in any computing machine yet made, and most informed people seem to think it's never yet been there, though in principle this depends on one's particular theory about the mind-matter relationship. <br /><br />To sum up, the asymmetry is that for brains, adoption of the strictly physical perspective brings out or highlights a mystery and a genuine unsolved problem, whereas for computers, adoption of the strictly physical perspective simply reminds us of the extent to which the human user is the one who personalizes or mentalizes the computer and its activities. <br /><br />Given this context, my thesis about computation and intelligence is as follows. 
Regardless of where lies the boundary between "complex structured object actually possessing mentality" and "complex structured object with no actual mind, but to which mindlike traits are sometimes attributed"... the "mathematical" understanding of (i) complex systems, (ii) the powers open to a system with a particular dynamics, and (iii) how to induce a desired dynamics in a sufficiently flexible class of complex system, do all imply the artificial realizability of something functionally equivalent to intelligence, and even "superintelligence", quite independently of whether this "artificial intelligence" has all the ontological traits possessed by the real thing.Mitchellhttps://www.blogger.com/profile/10768655514143252049noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-69387704221486304272010-10-24T18:13:04.855-07:002010-10-24T18:13:04.855-07:00Luke: at what point was I arguing about cryonics?Luke: at what point was I arguing about cryonics?adminhttps://www.blogger.com/profile/01020701980607126113noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-36147757129503380312010-10-24T17:55:57.251-07:002010-10-24T17:55:57.251-07:00> Upon creating such a differently-intelligent ...> Upon creating such a differently-intelligent being, . . .<br />> we might attribute to such a one rights (although we seem<br />> woefully incapable of doing so even for differently materialized<br />> intelligences that are nonetheless our palpable biological kin --<br />> for instance, the great apes, cetaceans).<br /><br />Or even Poofters!<br /><br />http://www.towleroad.com/2007/11/gay-man-battles.htmljimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-42393276808978263522010-10-24T17:43:14.271-07:002010-10-24T17:43:14.271-07:00> If it takes longer than ten years to be able ...> If it takes longer than ten years to be able to reanimate cryopatients. . 
.<br /><br />Curious how Martin Striz's comment about computer simulation of biological<br />systems somehow morphed into a comment about the plausibility of<br />cryonics. Or perhaps not so surprising, since it seems that<br />the Three Pillars of the Transhumanist Creed these days seem to<br />be: (1) superhuman AI, (2) nanotechnology and (3) physical immortality.<br />Either (1) begets (2), or (2) begets (1), and (1) and (2) beget (3).<br /><br />Goes the other way, too -- Melody Maxim recently complained on her<br />blog that people who are ostensibly interested in serious discussions<br />about cryonics seem to be prone to going off on tangents about<br />uploading.<br /><br />Saturday, October 2, 2010<br />Cryonics and Uploading<br />http://cryomedical.blogspot.com/2010/10/cryonics-and-uploading.htmljimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-57672008046148004752010-10-24T15:45:04.231-07:002010-10-24T15:45:04.231-07:00Luke, you are going to die.Luke, you are going to die.Dale Carricohttps://www.blogger.com/profile/02811055279887722298noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-29796965300425031482010-10-24T14:43:09.281-07:002010-10-24T14:43:09.281-07:00Martin: If it takes longer than ten years to be ab...Martin: If it takes longer than ten years to be able to reanimate cryopatients, that isn't a strong argument against cryonics.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-5956838.post-59038611546639979042010-10-24T09:52:49.397-07:002010-10-24T09:52:49.397-07:00> I was reading around page 200, thinking "...> I was reading around page 200, thinking "This argument<br />> doesn't work because the human mind doesn't work that way; it works<br />> like *this*." Then I got to page 264, and there was an excellent<br />> description of *this*.<br /><br />". . . This is an image of my mind [said the Nothing Machine]. . 
."<br /><br />It was not shaped like any Sophotech architecture Phaethon<br />had ever seen. There was no center to it, no fixed logic,<br />no foundational values. Everything was in motion, like a<br />whirlpool. . .<br /><br />The schematic of the Nothing thought system looked like the<br />vortex of a whirlpool. At the center, where, in Sophotechs,<br />the base concepts and the formal rules of logic and basic<br />system operations went, was a void. How did the machine<br />operate without basic concepts?<br /><br />There was continual information flow in the spiral arms<br />that radiated out from the central void, and centripetal<br />motion that kept the thought-chains generally all pointed<br />in the same direction. But each arm of that spiral,<br />each separate thought-action initiated by the spinning web,<br />each separate strand, had its own private embedded<br />hierarchy, its own private goals. The energy was distributed<br />throughout the thought-webwork by success feedback: each<br />parallel line of thought judged its neighbors according<br />to its own value system, and swapped data-groups and<br />priority-time according to their own private needs.<br />Hence, each separate line of thought was led, as if by<br />an invisible hand, to accomplish the overall goals of<br />the whole system. And yet those goals were not written<br />anywhere within the system itself. They were implied,<br />but not stated, in the system's architecture, written<br />in the medium, not the message.<br /><br />It was a maelstrom of thought without a core, without a<br />heart. . . Phaethon could see many blind spots, many<br />sections of which the Nothing Machine was not consciously<br />aware. In fact, wherever two lines of thought in the<br />web did not agree, or diverged, a little sliver of darkness<br />appeared, since such places lost priority. 
But wherever<br />thoughts agreed, wherever they helped each other,<br />or cooperated, additional webs were born, energy was<br />exchanged, priority time was accelerated, light grew.<br />The Nothing Machine was crucially aware of any area where<br />many lines of thought ran together.<br /><br />Phaethon could not believe what he was seeing. It was<br />like consciousness without thought, lifeless life, a<br />furiously active superintelligence with no core. . ."<br /><br />-- John C. Wright,<br />_The Golden Transcendence_<br /><br />-------------------------------------<br /><br />In Edelman's earlier books, the momentary state of the<br />thalamocortical system of the brain of an organism exhibiting<br />primary consciousness. . . was spoken of as constantly morphing<br />into its successor in a probabilistic trajectory influenced<br />both by the continued bombardment of new exteroceptive input<br />(actively sampled through constant movement)<br />and by the organism's past history (as reflected by the strengths<br />of all the synaptic connections within and among the groups of<br />the primary repertoire). [This] evolving state. . .<br />is given a new characterization in Edelman's [later books as]<br />the "dynamic core hypothesis" (UoC Chap. 12). . .<br /><br />Edelman and Tononi give [a] visual metaphor for the<br />dynamic core hypothesis in UoC on p. 145 (Fig. 
12.1):<br />an astronomical photo of M83, a spiral galaxy in<br />Hydra, with the caption "No visual metaphor can capture the<br />properties of the dynamic core, and a galaxy with complicated,<br />fuzzy borders may be as good or as bad as any other".jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-2939289288724578802010-10-24T08:30:47.880-07:002010-10-24T08:30:47.880-07:00I went Googling for Usenet and other Web commentar...I went Googling for Usenet and other Web commentary<br />on Wright's _Golden Age_ trilogy, and found some entertaining<br />remarks. Here's one:<br /><br />http://groups-beta.google.com/group/rec.arts.sf.written/msg/ecc9d27621264db0<br />------------------<br />Being an Objectivist may not define everything about Wright<br />as a writer, but it is the entirety of the ending to this trilogy. <br /><br />After two and a half books of crazy-ass post-human hijinks, Wright<br />declares that the Final Conflict will be between the rational<br />thought-process of the Good Guys and the insane thought-process of the<br />Bad Guys. He lays out the terms. He gives the classic, unvarnished<br />Objectivist argument in the protagonist's voice. He does a good job of<br />marshalling the usual objections to Objectivism (including mine) in<br />the protagonist's skeptical allies. He does a great job of describing<br />how *I* think the sentient mind works, and imputes it to the evil<br />overlord. <br /><br />(Really. I was reading around page 200, thinking "This argument<br />doesn't work because the human mind doesn't work that way; it works<br />like *this*." Then I got to page 264, and there was an excellent<br />description of *this*.)<br /><br />Then Wright declares that his side wins the argument, and that's the<br />end of the story. (The evil overlord was merely insane, and is<br />cured/convinced by Objectivism.) 
This is exactly as convincing as<br />every other Objectivist argument I've seen, which is to say "utterly<br />unsupported", and it quite left me feeling cheated for an ending.<br /><br />If that's not writing as defined by a particular moral philosophy,<br />what is? . . .jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-54435675526871886022010-10-24T08:28:06.282-07:002010-10-24T08:28:06.282-07:00Humans were able to apply their thinking inconsist...Humans were able to apply their thinking inconsistently,<br />having one standard, for example, related to scientific<br />theories, and another for political theories: one standard<br />for himself, and another for the rest of the world.<br /><br />But since Sophotech concepts were built up of innumerable<br />logical particulars, and understood in the fashion called<br />entire, no illogic or inconsistency was possible within<br />their architecture of thought. Unlike a human, a<br />Sophotech could not ignore a minor error in thinking<br />and attend to it later; Sophotechs could not prioritize<br />thought into important and unimportant divisions;<br />they could not make themselves unaware of the implications<br />of their thoughts, or ignore the context, true meaning, and<br />consequences of their actions.<br /><br />The secret of Sophotech thinking-speed was that they<br />could apprehend an entire body of complex thought,<br />backward and forward, at once. The cost of that speed<br />was that if there were an error or ambiguity anywhere<br />in that body of thought, anywhere from the most definite<br />particular to the most abstract general concept, the<br />whole body of thought was stopped, and no conclusions<br />reached. . .<br /><br />Sophotechs cannot form self-contradictory concepts, nor<br />can they tolerate the smallest conceptual flaw anywhere<br />in their system. 
Since they are entirely self-aware<br />they are also entirely self-correcting. . .<br /><br />Sophotechs, pure consciousness, lack any unconscious<br />segment of mind. They regard their self-concept with the<br />same objective rigor as all other concepts. The moment we conclude<br />that our self-concept is irrational, it cannot proceed. . .<br /><br />Machine intelligences had no survival instinct to override<br />their judgment, no ability to formulate rationalizations,<br />or to concoct other mental tricks to obscure the true<br />causes and conclusions of their cognition from themselves. . .<br /><br />Sophotech existence (it could be called life only by<br />analogy) was a continuous, deliberate, willful, and<br />rational effort. . .<br /><br />For an unintelligent mind, a childish mind. . . their beliefs<br />in one field, or on one topic, could change without<br />affecting other beliefs. But for a mind of high intelligence,<br />a mind able to integrate vast knowledge into a single<br />unified system of thought, Phaethon did not see how<br />one part could be affected without affecting the whole.<br />This was what the Earthmind meant by 'global'. . . .<br /><br />[B]y saying 'Reality admits of no contradictions' . . .<br />[s]he was asserting that there could not be a model<br />of the universe that was true in some places, false<br />in others, and yet which was entirely integrated and<br />self-consistent. Self-consistent models either had<br />to be entirely true, entirely false, or incomplete."<br /><br />_The Golden Transcendence_, pp. 140 - 146jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-7359160672561257552010-10-24T08:27:36.392-07:002010-10-24T08:27:36.392-07:00There is a (minor) SF author named John C. Wright....There is a (minor) SF author named John C. 
Wright.<br />He wrote a transhumanist SF trilogy (overall title:<br />"The Golden Age") comprising the volumes<br />_The Golden Age_<br />http://www.amazon.com/Golden-Age-Book/dp/0812579844<br />_The Phoenix Exultant_<br />http://www.amazon.com/Phoenix-Exultant-Golden-Age/dp/0765343541<br />_The Golden Transcendence_<br />http://www.amazon.com/Golden-Transcendence-Last-Masquerade-Age/dp/B000C4SSFI<br /><br />The books were received rapturously in >Hist circles, and<br />the author himself was warmly welcomed on one of the prominent<br />mailing lists, until his conversion (from Objectivism) to<br />Christianity (with all that entails) made him persona non<br />grata.<br /><br />However, Wright's science-fictional AIs (known in the books as<br />"sophotechs") capture the flavor of the kind of AI still<br />dreamed of by the preponderance of >Hists.<br /><br />Compare this description to the views of Gerald M. Edelman,<br />summarized above.<br /><br />-------------------------------<br /><br />Sophotechs are digital and entire intelligences. Sophotech<br />thought-speeds can only be achieved by an architecture<br />which allows for instantaneous and nonlinear concept<br />formation. . . Digital thinking meant that there was a<br />one-to-one correspondence between any idea and the<br />objects that idea was supposed to represent. All humans. . .<br />thought by analogy. In more logical thinkers, the<br />analogies were less ambiguous, but in all human thinkers,<br />the emotions and the concepts their minds used were<br />generalizations, abstractions that ignored particulars.<br /><br />Analogies were false to facts, comparative matters of<br />judgment. The literal and digital thinking of the<br />Sophotechs, on the other hand, were matters of logic. . .jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.com