Comments on amor mundi: GO FAI a Kite! -- Dale Carrico

jimf | 2014-02-28 10:01
More propaganda from the usual suspects:

http://hplusmagazine.com/2014/02/28/saving-the-world-with-analytical-philosophy/
----------------
Saving the World with Analytical Philosophy
Ben Goertzel
February 28, 2014

Stuart Armstrong, a former mathematician currently employed
as a philosopher at Oxford University's Future of Humanity Institute,
has recently released an elegant little booklet titled Smarter Than Us.
The theme is the importance of AGI to the future of the world. . .

Armstrong wrote Smarter Than Us at the request of the
Machine Intelligence Research Institute, formerly called the
Singularity Institute for AI -- and indeed, the basic vibe of the
booklet will be very familiar to anyone who has followed SIAI/MIRI
and the thinking of its philosopher-in-chief Eliezer Yudkowsky.
Armstrong, like the SIAI/MIRI folks, is an adherent of the school
of thought that the best way to work toward an acceptable future
for humans is to try to figure out how to create superintelligent
AGI systems that are provably going to be friendly to humans,
even as the systems evolve and use their intelligence to
drastically improve themselves. . .
It's worth reading as an elegant representation of a certain
perspective on the future of AGI, humanity and the world.

Having said that, though, I also have to add that I find some of
the core ideas in the book highly unrealistic.

The title of this article summarizes one of my main disagreements.
Armstrong seriously seems to believe that doing analytical philosophy
(specifically, moral philosophy aimed at formalizing and
clarifying human values so they can be used to structure
AGI value systems) is likely to save the world.

I really doubt it!
====

Dale Carrico | 2014-02-26 17:44
Goertzel needs to spend a decade or two in Nauru
(http://amormundi.blogspot.com/2010/10/nauru-needs-futurologists.html)
rethinking his priorities; once you've read Edelman there is little
reason to wade into silly mind-ecologists and singularitarians, if you
ask me. And if one is looking for a satisfying balance between evo and
devo, I think it goes a little something like this:
http://youtu.be/jadvt7CbH1o

jimf | 2014-02-26 17:34
Toward a Middle Way

I've presented a dichotomy between symbolic and connectionist AI --
rule-based and neural-net AI. . .

[This] glosses over the peculiar vagueness of the notions of "symbolic"
and "connectionist" themselves. . . There is a valid distinction between AI
that is inspired by the brain, and AI that is inspired by conscious reasoning
and problem-solving behavior. But the distinction between "symbolic" and
"connectionist" knowledge representation is not as clear as it's usually
thought to be. . .

Of course, there are extremes of symbolic AI and extremes of connectionism. . .
[But] real intelligence only comes about when the two kinds of knowledge
representation intersect, interact and build on each other.

I'm certainly not alone in coming to the conclusion that the middle way
is where it's at. For instance, Gerald Edelman, a Nobel Prize-winning
biologist, proposed a theory of "neuronal group selection," or Neural
Darwinism, which describes how the brain constructs larger-scale networks
called "maps" out of neural modules, and selects between these maps in an
evolutionary manner, in order to find maps of optimum performance. And
Marvin Minsky, the champion of rule-based AI, has moved in an oddly
similar direction, proposing a "Society of Mind" theory in which mind is
viewed as a kind of society of actors or processes that send messages to
each other and form alliances into temporary working groups.

Minsky's and Edelman's ideas differ on many details. Edelman thinks
rule-based AI is claptrap of the worst possible kind. Minsky still upholds
the rule-based paradigm -- though he now admits that it may sometimes be
productive to model the individual "actors" or "processes" of the mind
using neural nets. . .
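(An aside on the Edelman passage above: the "selection between maps in an evolutionary manner" that he describes can be caricatured in a few lines. The sketch below is purely illustrative -- the weight-vector "maps", the target behavior, and the fitness and mutation schemes are all invented here, not taken from Edelman's or Goertzel's actual models.)

```python
import random

# Toy caricature of "neuronal group selection": a population of candidate
# "maps" (here just 3-element weight vectors) competes on a task, and the
# better-performing maps are retained and varied. The task, scoring, and
# mutation scheme are invented for illustration only.

random.seed(0)

TARGET = [0.2, -0.5, 0.9]           # behavior the maps are selected toward

def fitness(map_):                  # higher is better: negative squared error
    return -sum((w - t) ** 2 for w, t in zip(map_, TARGET))

def mutate(map_):                   # small random perturbation of each weight
    return [w + random.gauss(0, 0.1) for w in map_]

population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                      # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]   # variation

best = max(population, key=fitness)
print([round(w, 2) for w in best])
```

The point of the caricature is only the loop structure: variation plus selection acting on whole "maps", rather than a learning rule applied to individual connections.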
But even so, the Society of Mind theory and the
Neural Darwinism approach are both indicative of a shift toward a new
view of the structure of intelligence, one which I believe is fundamentally
correct. . .

What Minsky and Edelman share is a focus on the intermediate level of
process dynamics. They are both looking above neurons and below rigid
rational rules, and trying to find the essence of mind in the interactions
of large numbers of middle-level psychological processes. I believe this
is the correct perspective, in large part because I think it is how the
human mind works. . .
====

Interesting that Goertzel mentions Edelman so often in
his recent papers. He didn't used to think much of the guy,
IIRC.

jimf | 2014-02-26 17:33
> So, the "bottom-up" con-artists want us to take them seriously
> because they pretend they can deliver. . . ["Good Old-Fashioned Artificial
> Intelligence"] without understanding intelligence as such, while the "top-down"
> con-artists want us to take them seriously because they. . .
> still want to understand intelligence. . . even though they don't
> understand it any more than they ever did. . .

Something like that.
;->

But, as a very Smart person once said, we have to find the
right balance between the Evo and the Devo.

Here's something from one of the, er, horses', er, mouths:

http://www.goertzel.org/books/DIExcerpts.htm
------------------
Nets versus Rules

When I first started studying AI in the mid-1980s, it seemed
that AI researchers were fairly clearly divided into two camps,
the neural net camp and the logic-based or rule-based camp.
This isn't quite so true anymore, but in reviewing the history of AI,
it's an interesting place to start. Both of these camps wanted
to make AI by simulating human intelligence, but they focused
on very different aspects of human intelligence. One modeled
the brain, the other modeled the mind.

The neural net approach starts with neurons, the nerve cells
the brain is made of. It tries to simulate the ways in which these
cells are linked together, and in which they achieve cooperative
behaviors by nonlinearly spreading electricity among each other,
and modulating each other's chemical properties. . .

Rule-based models, on the other hand, try to simulate the mind's
ability to make logical, rational decisions, without asking how the
brain does this biologically. They trace back to a century of revolutionary
developments in mathematical logic, culminating in the realization
that Leibniz's dream of a complete logical formalization of all knowledge
is actually achievable in principle, although very difficult in practice.

To most any observer not caught up on one or another side of the debate,
it's obvious that both of these ways of looking at the mind are extremely
limited. True intelligence requires more than following carefully defined
rules, and it also requires more than random links between a few thousand
artificial neurons. . .
====
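The two camps contrasted in the excerpt above can be put side by side in code. A minimal sketch, assuming nothing from Goertzel's own systems: the task (logical AND), the learning rate, and the epoch count are all invented for illustration.

```python
# Illustrative contrast between the two camps Goertzel describes, on a
# toy task: compute logical AND. Task and training details are invented.

# --- Rule-based camp: the behavior is written down explicitly ---
def rule_based_and(x1, x2):
    return 1 if (x1 == 1 and x2 == 1) else 0

# --- Neural-net camp: a single neuron learns the same behavior ---
def train_perceptron(data, epochs=20, lr=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x1, x2, label in data:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred          # classic perceptron update
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

data = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w1, w2, b = train_perceptron(data)

def neural_and(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, rule_based_and(x1, x2), neural_and(x1, x2))
```

The hand-written rule and the trained neuron end up computing the same function; the difference the excerpt is pointing at is where the knowledge lives -- in an explicit rule versus in learned weights.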