Comments on amor mundi: Richard Jones Critiques Superlativity
Blog author: Dale Carrico (http://www.blogger.com/profile/02811055279887722298)

[2007-10-26 09:29]

"Utilitarian" wrote:<BR/><BR/>> GOFAI/AI based on more formal algorithms. (I'm not as convinced as<BR/>> you appear to be that this area, defined broadly, won't produce<BR/>> results. I would say that improvements in pattern recognition<BR/>> and statistical algorithms (in search, translation, biometrics)<BR/>> have been quite significant. . .<BR/><BR/>Maybe even more significant than we think!<BR/><BR/>An entertaining thread on /. from a couple of years ago:<BR/><BR/>> [C]ompany GTX Global. . . claim[s] they've developed the<BR/>> first 'true' AI.<BR/><BR/>http://developers.slashdot.org/article.pl?sid=05/12/03/065211<BR/><BR/>Hey, wasn't "GTX" the name of the media/telecommunications<BR/>conglomerate in James Tiptree, Jr.'s story "The Girl Who<BR/>Was Plugged In"? (Forerunner of William Gibson's Sense/Net<BR/>and Tally Isham, the girl with the Zeiss-Ikon eyes.)<BR/><BR/>From the thread:<BR/><BR/>> Interesting to see how the guy went from selling satellite TV<BR/>> equipment to having the best AI ever. This is a truly amazing<BR/>> trajectory. . .

-- jimf (https://www.blogger.com/profile/04975754342950063440)

[2007-10-25 12:31]

Anne Corwin wrote:<BR/><BR/>> I guess I just see this kind of media piece (the BBC thing)<BR/>> as a "future cultural artifact" moreso than anything else.<BR/><BR/>Yes, although fiction seems to hold its "cultural artifactual"<BR/>value longer than non-fiction.<BR/><BR/>From my childhood, _The Outer Limits_ is still eminently watchable.<BR/>That was more psychological/Gothic horror than SF, but "The Sixth<BR/>Finger" is right-on-the-money transhumanistically (not surprising, since<BR/>it was a rip-off of Shaw's _Back to Methuselah_ -- not that I hold<BR/>that against it). David McCallum's portrayal of the >H (**not** >Hist ;-> )<BR/>Gwyllm Griffiths is a fantastic piece of acting. The narcissism (so irrational<BR/>that it's almost an embarrassing plot hole, for somebody who's supposed to be so smart,<BR/>but we can forgive it in the transcendental light of the finale) is<BR/>there, too -- "Life should go forward, see, not backward. But how can<BR/>a man go forward here? -- it's the most backward place in the world!"<BR/>"You'll go forward, Gwyllm; you're smarter than the others." "Well I'm<BR/>too smart to go on eating coal dust for the rest of my life. All I<BR/>need is a chance to use my brain, and I'd show 'em. I'd be<BR/>drivin' 'round in a sports car with a big gold ring on my finger."<BR/><BR/>_Star Trek_ is still eminently watchable, with glosses on >Hist themes<BR/>(in "Where No Man Has Gone Before", "Errand of Mercy", "What Are<BR/>Little Girls Made Of?"
and "The Return of the Archons", among<BR/>other episodes) deserving of more credit than the contemporary<BR/>>Hists have ever given them.<BR/><BR/>And these were both mainstream network (ABC and NBC, respectively)<BR/>TV shows, for cryin' out loud!<BR/><BR/>Even a cartoon like _The Jetsons_ retains its entertainment value<BR/>(all the more so in that flying cars that fold up into briefcases<BR/>and apartment buildings on stalks that can be elevated above<BR/>the rainclouds at the touch of a button have yet to materialize).<BR/><BR/>There was a non-fiction show that came on Sunday nights (IIRC)<BR/>called "The 21st Century" that I used to watch religiously.<BR/>Narrated by Walter Cronkite, of all people. The only thing I<BR/>remember about that show now is the opening title-sequence that<BR/>showed a counter running from the current year (1967, or whatever it was)<BR/>up through the 70's, 80's, and 90's, and finally rolling up<BR/>2000 and 2001, where it stopped. (My God, all those years have<BR/>been lived through, now.)<BR/>http://www.retrofuture.com/spaceage.html<BR/><BR/>Syd Mead's artwork (which I remember well from 1960's car magazines<BR/>like _Motor Trend_) is still **fabulous**.<BR/>http://www.scrubbles.net/sydmead.html<BR/><BR/>An exception I'll grant to the ephemerality of non-fiction is<BR/>Arthur C. Clarke's _Profiles of the Future_, which is still<BR/>eminently readable even though it's starting to be disappointingly<BR/>off-target.

-- jimf (https://www.blogger.com/profile/04975754342950063440)

[2007-10-25 10:46]

"Utilitarian" wrote:<BR/><BR/>> Could you allocate your >Hist distaste among the following<BR/>> in relation to AI?<BR/><BR/>All right, I'll make a stab at this. All these points are,<BR/>however, as Dale would say, "inter-implicated", as I've<BR/>come to realize.<BR/><BR/>> 1.
GOFAI/AI based on more formal algorithms.<BR/><BR/>Call it 10%.<BR/><BR/>> I'm not as convinced as you appear to be that this area,<BR/>> defined broadly, won't produce results. I would say that<BR/>> improvements in pattern recognition and statistical algorithms<BR/>> (in search, translation, biometrics) have been quite significant. . .<BR/><BR/>So that maybe a build-up of the tools of "weak AI" will<BR/>coalesce into a capability for "strong AI". I'm not sanguine.<BR/><BR/>> . . .even though the past failures of GOFAI should substantially<BR/>> lower our estimates of its success.)<BR/><BR/>Indeed.<BR/><BR/>There is another view of the whole question of intelligence which is,<BR/>rather oddly, simply not bruited about in >Hist circles.<BR/>There are plausible reasons for this. One is that it goes against<BR/>both the philosophical (Aristotelian, or crude Ayn Randian)<BR/>and political (what George Lakoff calls politics based on<BR/>"strict-father" morality) prejudices of the >Hist community.<BR/>Another is that it goes against the personal prejudices of some<BR/>of the most vocal of the >Hists (e.g., in that it simply wouldn't<BR/>do if we're going to **guarantee** "Friendliness").<BR/><BR/>I'm thinking of intelligence as a "selectional" rather than an<BR/>"instructional" process.<BR/><BR/>As evolutionary epistemologist Henry Plotkin puts it:<BR/><BR/>"[W]hy should the brain be seen as a Darwinian<BR/>kind of machine rather than as a Lamarckian<BR/>machine?... Forced to take sides,... there<BR/>are two... reasons for choosing the selectionist<BR/>camp. One is the problem of creativity...<BR/>Intelligence... involves... the production of<BR/>novel solutions to the problems posed by<BR/>change -- solutions that are not directly<BR/>given in the experienced world... Such<BR/>creativity cannot occur if change is slavishly<BR/>tracked by instructionalist devices. So<BR/>what we see here is that while selection<BR/>can mimic instruction, the reverse is never<BR/>true... 
Instructional intelligence comprises<BR/>only what has been actually experienced...<BR/>Indeed, according to D. T. Campbell, the father<BR/>of modern evolutionary epistemology, selectional<BR/>processes are required for the acquisition of<BR/>any truly new knowledge about the world:<BR/>'In going beyond what is already known, one<BR/>cannot but go blindly. If one goes wisely,<BR/>this indicates already achieved wisdom of<BR/>some general sort.' Instruction is never<BR/>blind. Selection always has an element...<BR/>of blindness in it. At the heart of all<BR/>creative intelligence is a selectional<BR/>process, no matter how many instructional<BR/>processes are built on top of it.<BR/><BR/>The [other] reason for choosing selection<BR/>over instruction is one of parsimony and<BR/>simplicity. If the primary heuristic<BR/>[i.e., phylogenetic evolution]<BR/>works by selectional processes, which it<BR/>most certainly does,... and if that other<BR/>embodiment of the secondary heuristic<BR/>that deals with our uncertain chemical<BR/>futures, namely the immune system, works<BR/>by selectional processes, which is now<BR/>universally agreed, then why should one be<BR/>so perverse as to back a different horse<BR/>when it comes to intelligence?<BR/><BR/>A nested hierarchy of selectional processes is<BR/>a simple and elegant conception of the nature<BR/>of knowledge. There will have to be good<BR/>empirical reasons for abandoning it."<BR/><BR/>-- _Darwin Machines and the Nature of Knowledge_,<BR/>Chapter 5, "The Evolution of Intelligence",<BR/>p. 171<BR/><BR/>Or Gerald M. Edelman:<BR/><BR/>"Clearly, if the brain evolved in such a fashion, and<BR/>this evolution provided the biological basis for the eventual<BR/>discovery and refinement of logical systems in human cultures,<BR/>then we may conclude that, in the generative sense, selection is<BR/>more powerful than logic. 
It is selection -- natural and somatic<BR/>-- that gave rise to language and to metaphor, and it is<BR/>selection, not logic, that underlies pattern recognition and<BR/>thinking in metaphorical terms. Thought is thus ultimately based<BR/>on our bodily interactions and structure, and its powers are<BR/>therefore limited in some degree. Our capacity for pattern<BR/>recognition may nevertheless exceed the power to prove<BR/>propositions by logical means... This realization does not, of<BR/>course, imply that selection can take the place of logic, nor<BR/>does it deny the enormous power of logical operations. In the<BR/>realm of either organisms or of the synthetic artifacts that we<BR/>may someday build, we conjecture that there are only two<BR/>fundamental kinds -- Turing machines and selectional systems.<BR/>Inasmuch as the latter preceded the emergence of the former in<BR/>evolution, we conclude that selection is biologically the more<BR/>fundamental process. In any case, the interesting conjecture is<BR/>that there appear to be only two deeply fundamental ways of<BR/>patterning thought: selectionism and logic. It would be a<BR/>momentous occasion in the history of philosophy if a third way<BR/>were found or demonstrated"<BR/><BR/>-- _A Universe of Consciousness_, p. 214<BR/><BR/>Or Jean-Pierre Changeux:<BR/><BR/>"If the hypotheses put forward [in this book] are correct, <BR/>the formation of. . . representations, although using <BR/>different elements and different levels of organization, obeys <BR/>a common rule, inspired by Darwin's original hypothesis. A <BR/>process of selective stabilization takes over from diversification <BR/>by variation. The mechanisms associated with evolution of the <BR/>genome[,]... [c]hromosomal reorganization, duplication of genes, <BR/>recombinations and mutations, all create genetic diversity, but <BR/>only a few of the multiple combinations that appear in each <BR/>generation are maintained in natural populations. 
During <BR/>postnatal epigenesis, the "transient redundancy" of cells <BR/>and connections and the way in which they grow produce a <BR/>diversity not restricted to one dimension like the genome, <BR/>but existing in the three dimensions of space. Here again, <BR/>only a few of the geometric configurations that appear during <BR/>development are stabilized in the adult... Does such a <BR/>model apply for the more "creative" aspects of our thought <BR/>processes? Is it also valid for the acquisition of knowledge? <BR/><BR/>... <BR/><BR/>It is... worth noting that in the history of ideas "directive" <BR/>hypotheses have most often preceded selective hypotheses. <BR/>When Jean-Baptiste de Lamarck tried to found his theory of <BR/>"descendance" on a plausible biological mechanism, he proposed <BR/>the "heredity of acquired characteristics", a tenet that <BR/>advances in genetics would eventually destroy. One had to <BR/>wait almost half a century before the idea of selection was <BR/>proposed by Charles Darwin and Alfred Wallace and validated <BR/>in principle, if not in all the details of its application. <BR/>In the same way the first theories about the production of <BR/>antibodies were originally based on directive models before <BR/>selective mechanisms replaced them. It could conceivably be <BR/>the same for theories of learning."<BR/><BR/>-- _Neuronal Man_, Chapter 9, "The Brain -- Representation of the <BR/>World"<BR/><BR/>Not **all** >Hists are unsympathetic to these ideas. I've<BR/>mentioned Eugen Leitl. Another example is John Smart:<BR/><BR/>http://www.accelerationwatch.com/specu.html<BR/>"Emergent AI: Stable, Moral, and Interdependent vs.<BR/>Unpredictable, Post-Moral, or Isolationist? . . .<BR/><BR/>Are complex systems naturally convergent,<BR/>self-stabilizing and symbiotic as a function of<BR/>their computational depth? 
Is the self-organizing<BR/>emergence of 'friendliness' or 'robustness to<BR/>catastrophe' as inevitable as 'intelligence,'<BR/>when considered on a universal scale?"<BR/><BR/>(Smart clearly thinks the answer is "yes").<BR/>He goes on to comment:<BR/><BR/>"I tend to disagree with many assumptions of Yudkowsky['s<BR/>'Friendly AI',] but his is a good example of top-down models which<BR/>express a 'conditional confidence' in future friendliness.<BR/>I share his conclusion but without invoking a 'consciousness<BR/>centralizing' world view, which assumes that human-imposed<BR/>conditions will continue to play a central role in the<BR/>self-balancing, integrative, and information-protecting<BR/>processes that are emerging within complex adaptive<BR/>technological systems. While it is true that consciousness<BR/>and human rationality play central roles in the self-organizing<BR/>of the collective human complex adaptive system<BR/>(human civilization, species consciousness), and that<BR/>these processes often control the perceptions and models<BR/>we build of the universe (ie, the quality of our individual<BR/>and collective simulations) such systems do not appear<BR/>to control the evolutionary development of the universe<BR/>itself, and are thus peripheral to the self-organization<BR/>of all other substrates, be they molecular, genetic,<BR/>neural, or most importantly in this case, technologic.<BR/><BR/>It is deceptively easy to assume that because humans<BR/>are catalysts in the production of technology to increase<BR/>our local understanding of the universe, that we ultimately<BR/>'control' that technology, and that it develops at a<BR/>rate and in a manner dependent on our conscious understanding<BR/>of it. 
Such may approximate the actual case in the initial<BR/>stages, but all complex adaptive systems rapidly develop<BR/>local centers of control, and technology is proving to be<BR/>millions of times better at such 'environmental learning'<BR/>than the biology that it is co-evolving with. It can be<BR/>demonstrated that all evolutionary developmental substrates<BR/>take care of these issues on their own, from within.<BR/>Technological evolutionary development is rapidly engaged<BR/>in the process of encoding, learning, and self-organizing<BR/>environmental simulations in its own contingent fashion,<BR/>and with a degree of M[atter]E[nergy]S[pace]T[ime -- a most<BR/>unfortunate Scientological choice of terminology]<BR/>compression at least ten million times faster than human<BR/>memetic evolutionary development. Thus humans are both<BR/>partially-cognizant spectators and willing catalysts in<BR/>this process. This appears to be the hidden story of<BR/>emergent A.I.."<BR/><BR/>> ["Utilitarian" continued:]<BR/>><BR/>> 2. Grandiose claims of personal programming or problem-solving<BR/>> ability. (These are to be discounted.)<BR/>><BR/>> 3. Cultish psychological/sociological characteristics. (We've<BR/>> discussed this.)<BR/><BR/>These are inseparable for me, and together I'd count them at 70%.<BR/><BR/>A Web commentator wrote:<BR/><BR/>http://www.blog.speculist.com/archives/2006_07.html<BR/>--------------------------------------------------<BR/>Hired Help<BR/><BR/>Michael Anissimov writes that achieving Friendly AI is a<BR/>serious proposition -- so serious, in fact, that we might<BR/>ought to go ahead and pay somebody to do it. <BR/><BR/>It's really not that radical a proposition. You want a<BR/>radical proposition? How about this, written by the<BR/>"someone" whom Michael has in mind to hire to solve the<BR/>friendly AI problem (as quoted elsewhere on Accelerating Future):<BR/><BR/>"There is no evil I have to accept because 'there’s nothing<BR/>I can do about it'. 
There is no abused child, no oppressed peasant,<BR/>no starving beggar, no crack-addicted infant, no cancer patient,<BR/>literally no one that I cannot look squarely in the eye.<BR/>I’m working to save everybody, heal the planet, solve all the<BR/>problems of the world."<BR/><BR/>If it was anybody else saying it, it would sound kind of,<BR/>well, crazy.<BR/>--------------------------------------------------<BR/><BR/>Yeah, kind of. (Anybody **else**?!) :-0<BR/><BR/>Some people have very little defense against this kind of<BR/>"guru whammy", and other folks are all too willing to<BR/>exploit it for their own ends.<BR/><BR/>I found a rather provocative characterization of another<BR/>putatively historical figure on the Web recently:<BR/><BR/>"Jesus Christ, narcissist"<BR/>by Sam Vaknin<BR/>http://health.groups.yahoo.com/group/narcissisticabuse/message/5148<BR/><BR/>> ["Utilitarian" continued:]<BR/>><BR/>> 4. Claims of strong ethical implications flowing from limited influence<BR/>> over AI development.<BR/><BR/>You mean that if we can't control the outcome of the development<BR/>of >H intelligence (in the form of AI), then maybe it's unethical<BR/>to do it at all?<BR/><BR/>I dunno, it sometimes seems to me that **some** >Hists are eager to<BR/>instantiate Hugo de Garis' "artilect war" before there's even as<BR/>good a reason as de Garis seems to think there would have to be before it<BR/>would happen. How ethical is that?<BR/><BR/>I'm suspicious of claims of "superior" ethicality. It's part of<BR/>the guru-whammy, for one thing. 
It's a rhetorical ploy to cut off<BR/>criticism.<BR/><BR/>Also, I think that ethical discussions among >Hists, like discussions<BR/>of intelligence, tend to over-rely on formal deontological systems.<BR/><BR/>I prefer Bertrand Russell's characterization:<BR/><BR/>WOODROW WYATT: Well now, if you don't believe in religion,<BR/>and you don't; and if you don't, on the whole,<BR/>think much of the assorted rules thrown up by<BR/>taboo morality, do you believe in any system of ethics?<BR/><BR/>BERTRAND RUSSELL: Yes, but it's very difficult to separate<BR/>ethics altogether from politics. Ethics, it seems<BR/>to me, arises in this way: a man is inclined to do<BR/>something which benefits him and harms his neighbor.<BR/>Well, if it harms a good many of his neighbors, they<BR/>will combine together and say, "Look, we don't like<BR/>this sort of thing; we will see to it that it<BR/>**doesn't** benefit the man." And that leads<BR/>to the criminal law. Which is perfectly rational:<BR/>it's a method of harmonizing the general and private<BR/>interest.<BR/><BR/>WYATT: But now, isn't it, though, rather inconvenient<BR/>if everybody goes about with his own kind of private<BR/>system of ethics, instead of accepting a general one?<BR/><BR/>RUSSELL: It would be, if that were so, but in fact<BR/>they're not so private as all that because, as I was<BR/>saying a moment ago, they get embodied in the criminal<BR/>law and, apart from the criminal law, in public<BR/>approval and disapproval. People don't like to<BR/>incur public disapproval, and in that way, the<BR/>accepted code of morality becomes a very potent<BR/>thing.<BR/><BR/>-- LP "Bertrand Russell Speaking" (1959)<BR/> (Woodrow Wyatt Interviews)<BR/><BR/>Or Antonio R. Damasio:<BR/><BR/>"The essence of ethical behavior does not begin with<BR/>humans. 
Evidence from birds (such as ravens)<BR/>and mammals (such as vampire bats, wolves, baboons,<BR/>and chimpanzees) indicates that other species<BR/>can behave in what appears, to our sophisticated<BR/>eyes, as an ethical manner. They exhibit sympathy,<BR/>attachments, embarrassment, dominant pride,<BR/>and humble submission. They can censure and<BR/>recompense certain actions of others. Vampire<BR/>bats, for example, can detect cheaters among<BR/>the food gatherers in their group and punish<BR/>them accordingly. Ravens can do likewise. Such<BR/>examples are especially convincing among primates,<BR/>and are by no means confined to our nearest<BR/>cousins, the big apes. Rhesus monkeys can<BR/>behave in a seemingly altruistic manner toward<BR/>other monkeys. In an intriguing experiment<BR/>conducted by Robert Miller and discussed by<BR/>Marc Hauser, monkeys abstained from pulling a<BR/>chain that would deliver food to them if pulling<BR/>the chain also caused another monkey to receive<BR/>an electric shock. Some monkeys would not<BR/>eat for hours, even days. Suggestively, the<BR/>animals most likely to behave in an altruistic<BR/>manner were those that knew the potential target<BR/>of the shock. Here was compassion working better<BR/>with those who are familiar than with strangers.<BR/>The animals that previously had been shocked<BR/>also were more likely to behave altruistically.<BR/>Nonhumans can certainly cooperate or fail to do<BR/>so, within their group. This may displease<BR/>those who believe just behavior is an exclusively<BR/>human trait. 
As if it were not enough to be<BR/>told by Copernicus that we are not in the center<BR/>of the universe, by Charles Darwin that we have<BR/>humble origins, and by Sigmund Freud that we<BR/>are not full masters of our behavior, we have<BR/>to concede that even in the realm of ethics there<BR/>are forerunners and descent."<BR/><BR/>-- _Looking for Spinoza: Joy, Sorrow, and the Feeling Brain_,<BR/>Chapter 4, "Ever Since Feelings" (pp. 160 - 161)<BR/><BR/>OK, so call this 10%.<BR/><BR/>> ["Utilitarian" continued]<BR/>><BR/>> 5. Factors X, Y, Z...<BR/><BR/>Yeah, well there's the politics. Disappointingly right-wing.<BR/><BR/>As Nietzsche realized, once you've rejected 100%<BR/>pure foundationalist epistemology and ethics<BR/>(derived from God, or the universal rules of Logic<BR/>as discovered by Aristotle), then all **guarantees** are<BR/>off. It **doesn't** mean that the world instantly dissolves<BR/>into total chaos, but it **does** mean that things can<BR/>drift, over decades, centuries, or millennia (to say<BR/>nothing of geological ages) enough to make a lot<BR/>of people radically motion-sick. And it does indeed<BR/>mean that a powerful technology for the control of human<BR/>behavior, if it were ever invented, could allow a few<BR/>people to impose their will on the majority.<BR/>"For into the midst of all these policies comes the Ring<BR/>of Power, the foundation of Barad-dur, and the hope of Sauron.<BR/> 'Concerning this thing, my lords, you now all know enough for the<BR/>understanding of our plight, and of Sauron's. If he regains it, your valour<BR/>is vain, and his victory will be swift and complete: so complete that none<BR/>can foresee the end of it while this world lasts.'"<BR/><BR/>-- Tolkien, _The Fellowship of the Ring_, "The Council of Elrond"<BR/><BR/>C. S.
Lewis<BR/>also points out this unpleasant truth in _The Abolition of Man_<BR/>(his defense of foundationalist ethics; unfortunately, IMO, just<BR/>because something admits of unpleasant consequences,<BR/>that in itself is no ground for rejecting it as untrue).<BR/><BR/>And apropos the transhumanists, as Dale once pointed out,<BR/>"Lately, I have begun to suspect that at the temperamental<BR/>core of the strange enthusiasm of many technophiles for<BR/>so-called 'anarcho-capitalist' dreams of re-inventing the<BR/>social order, is not finally so much a craving for liberty<BR/>but for a fantasy, quite to the contrary, of TOTAL EXHAUSTIVE<BR/>CONTROL. This helps account for the fact that negative<BR/>libertarian technophiles seem less interested in discussing<BR/>the proximate problems of nanoscale manufacturing and the<BR/>modest benefits they will likely confer, but prefer to barrel<BR/>ahead to paeans to the 'total control over matter.'<BR/>They salivate over the title of the book From Chance to Choice<BR/>(in fact, a fine and nuanced bioethical accounting of<BR/>benefits and quandaries of genetic medicine), as if<BR/>biotechnology is about to eliminate chance from our lives<BR/>and substitute the full determination of morphology --<BR/>when it is much more likely that genetic interventions<BR/>will expand the chances we take along with the<BR/>choices we make. Behind all their talk of efficiency<BR/>and non-violence there lurks this weird micromanagerial<BR/>fantasy of sitting down and actually contracting explicitly<BR/>the terms of every public interaction in the hopes of<BR/>controlling it, getting it right, dictating the details.<BR/>As if the public life of freedom can be compassed<BR/>in a prenuptial agreement. . .<BR/><BR/>But with true freedom one has to accept an ineradicable<BR/>vulnerability and a real measure of uncertainty. We live<BR/>in societies with peers, boys.
Give up the dreams of total<BR/>invulnerability, total control, total specification.<BR/>Take a chance, live a little. Fairness is actually<BR/>possible. . ."<BR/><BR/>The "weird micro-managerial fantasy" isn't so weird after<BR/>all; it's a temperamental hankering after old (lost, for<BR/>good, but a lot of smart people aren't ready to acknowledge<BR/>it) religious certainties.<BR/><BR/>"[T]hat we are not inviolate selves but a pandemonium<BR/>or parliament of contesting inner voices, that we are<BR/>constructed not given from eternity, that even universal<BR/>mathematics might be as gapped and fissured as any<BR/>poststructuralist text..., once deeply shocking, has<BR/>become familiar news."<BR/><BR/>-- Damien Broderick, _Transrealist Fiction_, p. 56<BR/><BR/>"Please observe that the whole dilemma revolves pragmatically<BR/>about the notion of the world's possibilities. Intellectually,<BR/>rationalism invokes its absolute principle of unity as a<BR/>ground of possibility for the many facts. Emotionally, it<BR/>sees it as a container and limiter of possibilities, a<BR/>guarantee that the upshot shall be good. Taken in this way,<BR/>the absolute makes all good things certain, and all bad<BR/>things impossible (in the eternal, namely), and may be<BR/>said to transmute the entire category of possibility into<BR/>categories more secure. One sees at this point that<BR/>the great religious difference lies between the men who<BR/>insist that the world **must and shall be**, and those who<BR/>are contented with believing that the world **may be**, saved.<BR/>The whole clash of rationalistic and empiricist religion<BR/>is thus over the validity of possibility. . .<BR/><BR/>In particular **this** query has always come home to me:<BR/>May not the claims of tender-mindedness go too far?<BR/>May not the notion of a world already saved in toto<BR/>anyhow, be too saccharine to stand? May not religious<BR/>optimism be too idyllic? Must **all** be saved?
Is **no**<BR/>price to be paid in the work of salvation? Is the last<BR/>word sweet? Is all 'yes, yes' in the universe? Doesn't<BR/>the fact of 'no' stand at the very core of life?<BR/>Doesn't the very 'seriousness' that we attribute to life<BR/>mean that ineluctable noes and losses form a part of it,<BR/>that there are genuine sacrifices somewhere, and that<BR/>something permanently drastic and bitter always<BR/>remains at the bottom of its cup?<BR/><BR/>I can not speak officially as a pragmatist here;<BR/>all I can say is that my own pragmatism offers no<BR/>objection to my taking sides with this more moralistic<BR/>view, and giving up the claim of total reconciliation.<BR/>The possibility of this is involved in the pragmatistic<BR/>willingness to treat pluralism as a serious hypothesis.<BR/>In the end it is our faith and not our logic that<BR/>decides such questions, and I deny the right of any<BR/>pretended logic to veto my own faith. I find myself<BR/>willing to take the universe to be really dangerous<BR/>and adventurous, without therefore backing out and<BR/>crying 'no play.' I am willing to think that the<BR/>prodigal-son attitude, open to us as it is in many<BR/>vicissitudes, is not the right and final attitude<BR/>towards the whole of life. I am willing that there<BR/>should be real losses and real losers, and no total<BR/>preservation of all that is. I can believe in the<BR/>ideal as an ultimate, not as an origin, and as an<BR/>extract, not the whole. 
When the cup is poured off,<BR/>the dregs are left behind forever, but the possibility<BR/>of what is poured off is sweet enough to accept."<BR/><BR/>-- William James, _Pragmatism_,<BR/>Lecture 8, "Pragmatism and Religion"<BR/><BR/>So call that another 10%.

-- jimf (https://www.blogger.com/profile/04975754342950063440)

[2007-10-24 23:58]

Huh, that's the BBC thing they interviewed me for back in May -- I'm still not sure why they wanted to talk to me of all people, but I sort of saw the whole project as akin to those "gee whiz, what if <I>this</I> happened?" speculative science shows I loved to watch as a youngster.<BR/><BR/>Those shows captured my imagination. They did <I>not</I> turn me into a True Believer(TM) or convince me of the inevitability of any outcome(s) in particular. In fact, I find that one of the main values of such media is found in the realm of cultural anthropology -- it is always enlightening and entertaining to look back on all the neat (or frightening) stuff that never actually ended up happening according to the speculations presented.<BR/><BR/>This is not to say that superlative critique is not needed -- of course it is, and people do need to be educated as to how they can avoid being seduced by wishful thinking and the "I don't need to think for myself anymore!" laziness that can come about as a result of discovering persons they perceive as Superlatively Smart. I guess I just see this kind of media piece (the BBC thing) as a "future cultural artifact" moreso than anything else. I liked having the opportunity to say some words about longevity research and about how morphological freedom should result in a proliferation (rather than a contraction) of diversity, but mainly I saw it as a sort of "fun" thing.
But of course it's plenty OK to have fun with something and offer needed critiques of it at the same time.

-- Anne Corwin (https://www.blogger.com/profile/04940566603711834053)

[2007-10-24 20:51]

"It's the other -- baggage (not all or even primarily content-related)<BR/>of the >Hist community that I find more disturbing."<BR/>Could you allocate your >Hist distaste among the following in relation to AI?<BR/><BR/>1. GOFAI/AI based on more formal algorithms. (I'm not as convinced as you appear to be that this area, defined broadly, won't produce results. I would say that improvements in pattern recognition and statistical algorithms (in search, translation, biometrics) have been quite significant, even though the past failures of GOFAI should substantially lower our estimates of its success.)<BR/>2. Grandiose claims of personal programming or problem-solving ability. (These are to be discounted.)<BR/>3. Cultish psychological/sociological characteristics. (We've discussed this.)<BR/>4. Claims of strong ethical implications flowing from limited influence over AI development. (This, less so.)<BR/>5. Factors X, Y, Z...

-- Anonymous

[2007-10-24 16:43]

"Utilitarian" wrote:<BR/><BR/>> It's also hard to avoid drawing the parallel between<BR/>> Smalley's conclusion that complex molecular machines were<BR/>> beyond human design ability and his contemporaneous adoption<BR/>> of Christian Intelligent Design Creationism, concluding that<BR/>> the molecular machines of living organisms were too complex<BR/>> for abiogenesis.<BR/><BR/>Oh dear.
I didn't know about **that**.<BR/><BR/>It is true that there's been frustratingly little progress<BR/>in clarifying the pathways of abiogenesis since the Miller-Urey<BR/>experiment, which was pretty much all the school biology textbooks<BR/>had to say about it back in my day.<BR/><BR/>But I think retreating into Intelligent Design was a bit of an<BR/>overreaction on Smalley's part.<BR/><BR/>> For AI feasibility, what I can glean of the view within the<BR/>> field indicates that near-term development is very unlikely,<BR/>> but. . . we should assign higher probabilities over time.<BR/><BR/>OK, sure. Artificial-**anything** feasibility depends on what<BR/>kinds of artifacts we'll be capable of making. For intelligence<BR/>(understood in some kind of biological-analogical sense; I don't<BR/>really know what the word means otherwise)<BR/>we'll need a physical substrate that fulfills the kinds of<BR/>morphological and functional constraints that neuroscience<BR/>is beginning to suggest. Whether that physical substrate will<BR/>function anything like contemporary digital computers do is<BR/>an open question at this point.<BR/><BR/>> I. . . 
take into account other factors,<BR/>> like wild card biotech enhancements to intelligence (which I<BR/>> think are generally not considered at all by scientists estimating<BR/>> progress in their fields for the 21st century).<BR/><BR/>Oh, yeah, sure, there'll be wildcards.<BR/><BR/>All this is uncontroversial, in my view.<BR/><BR/>It's the other -- baggage (not all or even primarily content-related)<BR/>of the >Hist community that I find more disturbing.jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-88853426573372837542007-10-24T16:17:00.000-07:002007-10-24T16:17:00.000-07:00> I have already adjusted my understanding of scie...> I have already adjusted my understanding of scientific opinion<BR/>> for [the] silence [in mainstream scientific circles surrounding,<BR/>> presumably, MNT and/or AGI].<BR/><BR/>"And come to a conclusion the opposite of mine, it would seem."<BR/>We'd have to break down various issues. For the feasibility of pursuing nanotechnology research along more Drexler/CRN lines I take the mix of silence and a smattering of criticism as being a fairly strong negative signal about the usefulness of Drexlerian ideas as a design path, although the funding shenanigans related to the NNI probably had some role. 
(It's also hard to avoid drawing the parallel between Smalley's conclusion that complex molecular machines were beyond human design ability and his contemporaneous adoption of Christian Intelligent Design Creationism, concluding that the molecular machines of living organisms were too complex for abiogenesis.)<BR/><BR/>For AI feasibility, what I can glean of the view within the field indicates that near-term development is very unlikely, but hardware improvements, accumulating software techniques, the allocation of more human capital to the technology industry, improving neuroscience, the likelihood of biological intelligence enhancement, and increasing economic incentives for marginal AI improvements within fields such as finance, biometrics, and robotics make it seem that we should assign higher probabilities over time. <BR/><BR/>At the AI@50 conference, 41% of attendees indicated that AI would never fully simulate human intelligence, 41% that it would but not for at least 50 years, and 18% that it would in less than 50 years. Many of those saying that AI will never be able to simulate every function probably have consciousness in mind, which is of little interest for my purposes. Nevertheless, data like these push me in the direction of a probability distribution for AI development weighted heavily towards the further future. I don't outright adopt the central tendency of this opinion distribution, however, so as to take into account other factors, like wild-card biotech enhancements to intelligence (which I think are generally not considered at all by scientists estimating progress in their fields for the 21st century).Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-5956838.post-39599350233119987102007-10-24T14:54:00.000-07:002007-10-24T14:54:00.000-07:00"Utilitarian" wrote:> Varied incentives. . . might..."Utilitarian" wrote:<BR/><BR/>> Varied incentives. . . might not motivate people to expend<BR/>> the energy to. . . contribute to an important [area]. . 
.<BR/>><BR/>> I have already adjusted my understanding of scientific opinion<BR/>> for [the] silence [in mainstream scientific circles surrounding,<BR/>> presumably, MNT and/or AGI].<BR/><BR/>And come to a conclusion the opposite of mine, it would seem. Well, your conclusion<BR/>**is** the one popular among folks who contribute to on-line discussions<BR/>of these things. There are very, very few contributions from<BR/>people who (1) bother to think about these things at all and<BR/>(2) are not, or have ceased to be, "enthusiasts of some kind or another".<BR/><BR/>> Your point was. . . not dispositive.<BR/><BR/>So few are, in discussions of this kind. ;-><BR/><BR/>Well, YMMV, as they say.<BR/><BR/>Or, as Sir Thomas More says in _A Man For All Seasons_,<BR/>"The world must construe according to its wits."<BR/><BR/>And, as Elrond says to Aragorn, "The years will bring<BR/>what they will."jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-86490282731456596492007-10-24T12:55:00.000-07:002007-10-24T12:55:00.000-07:00James,Your point was clear enough, but old news an...James,<BR/><BR/>Your point was clear enough, but old news and not dispositive. Varied incentives of funding, status, career, etc might not motivate people to expend the energy to think about and debunk a worthless area, or conversely to contribute to an important one. When I have already adjusted my understanding of scientific opinion for a silence, you can't make the same evidence count double by repeating a known possible explanation for silence. 
That's why I described my interest in acquiring new evidence.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-5956838.post-2535510239117368512007-10-24T10:20:00.000-07:002007-10-24T10:20:00.000-07:00Richard Jones wrote (inhttp://www.softmachines.org...Richard Jones wrote (in<BR/>http://www.softmachines.org/wordpress/?p=354 ),<BR/>quoting Alfred Nordmann (in<BR/>http://www.uni-bielefeld.de/(en)/ZIF/FG/2006Application/PDF/Nordmann_essay.pdf ):<BR/><BR/>> ". . .[T]he boundaries between science and science fiction<BR/>> are blurred,. . . and the scientific community itself [is] at a<BR/>> loss to assert standards of credibility.” . . .<BR/>> [T]he more extreme the vision, the easier it is to sell to a<BR/>> TV commissioning editor. And, as Nordmann says:<BR/>> “The views of nay-sayers are not particularly interesting and<BR/>> members of a silent majority don’t have an incentive to<BR/>> invest time and energy just to 'set the record straight.'<BR/>> The experts in the limelight of public presentations or<BR/>> media coverage tend to be enthusiasts of some kind or another<BR/>> and there are few tools to distinguish between credible and<BR/>> incredible claims especially when these are mixed up in<BR/>> haphazard ways.”<BR/><BR/>This succinctly elucidates a point I was attempting (not very<BR/>successfully, I'm afraid) to make in an exchange with "Utilitarian"<BR/>in the comments of<BR/>http://amormundi.blogspot.com/2007/10/superlative-summary.html<BR/><BR/>-------------------------------<BR/>> ["Utilitarian" wrote:]<BR/>><BR/>> In my view, while Kurzweil, Bostrom, Yudkowsky, et al are very<BR/>> intelligent people, the key area for 'Singularitarian' activism<BR/>> now is getting people who are still smarter than them to examine<BR/>> these problems carefully.<BR/><BR/>You might as well be calling for the "people who are still smarter"<BR/>than Tom Cruise to be "carefully examining" the Scientologists'<BR/>case against psychiatry. 
You'll recall that Ayn Rand was piqued<BR/>that the mainstream philosophical community never deigned to<BR/>take her ideas seriously enough to discuss them. I suspect<BR/>that the really smart people simply have better things to do.<BR/>-------------------------------<BR/><BR/>"Utilitarian" wrote:<BR/><BR/>"You might as well be calling for the "people who are still smarter"<BR/>than Tom Cruise to be "carefully examining" the Scientologists'<BR/>case against psychiatry."<BR/>Unlike Cruise, Kurzweil has demonstrated both a high level of<BR/>intelligence and a strong grasp of technology. While his predictions<BR/>have included systematic errors on the speed of consumer adoption<BR/>of technologies, he has done quite well in predicting a variety of<BR/>technological developments (including according to Bill Gates),<BR/>not to mention inventing many innovative technologies. Bostrom has<BR/>published numerous articles in excellent mainstream journals and<BR/>venues, from Nature to Ethics to Oxford University Press.<BR/>Yudkowsky is not conventionally credentialed, but was a prodigy<BR/>and clearly has very high fluid intelligence.<BR/><BR/>The charge against these people has to be bias rather than lack of ability. <BR/><BR/>. . .<BR/><BR/>"I suspect that the really smart people simply have better things to do."<BR/>Yes, e.g. string theory, winning a Fields medal, becoming a billionaire.<BR/>These are better things to do for them personally, but not necessarily<BR/>for society. <BR/>-------------------------------<BR/><BR/>The problem is similar to the "coverage" of Creationism in the popular<BR/>media. 
If you scan through the FM dial on your car radio, chances<BR/>are excellent you'll come across a Creationist lecturer "demolishing"<BR/>the "pretenses" of Darwinism, telling you that "everybody" in the<BR/>scientific community knows that evolutionary theory doesn't hold<BR/>water (maybe even invoking the eighth-grade schema of the scientific<BR/>method that's been used here recently to "debunk" the categories<BR/>of psychiatric diagnosis, by pointing out that there's no possibility<BR/>of experimental confirmation of an evolutionary explanation for<BR/>the existence of life on earth).<BR/><BR/>The "silent majority" of professional biologists don't have an incentive<BR/>to invest the time and energy just to "set the record straight."<BR/>In fact, they might even be putting their careers as well as their<BR/>leisure time at risk by doing so.jimfhttps://www.blogger.com/profile/04975754342950063440noreply@blogger.com