I said, at some point or other, that "I think the words 'smart' 'intelligent' [and] 'act' shouldn’t be used literally to describe mechanical behavior."
In response to which I received from "Roko": So, the thinking process is limited to only biological organisms, but the thinking process isn't mechanical computing either? It's something we can't imitate with any kind of machine? Do I understand you correctly?
In response to this same statement of mine "Thom" responded: [Y]ou are directly contradicting literally the whole field of artificial intelligence. From the wikipedia article on AI:
Artificial Intelligence (AI) is the intelligence of machines and the branch of computer science which aims to create it. Major AI textbooks define the field as “the study and design of intelligent agents,” where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.
I do indeed regard "the thinking process" as one limited to biological organisms, as a factual, empirical matter in the present, except in science fiction and futurological handwaving, in which it has exactly the same substantial existence as do the ghosts and wizards and wands in Harry Potter books. The forms that actually-existing intelligence takes in the world should surely matter to people who claim to care about science as much as Robot Cultists claim to do. Among many other things, I find the term "thought" to be one that encompasses, in my view, more personal experiences and worldly phenomena than just reckoning with consequences. Part of the way I try to get at this difference is to stipulate a pretty conventional distinction, at least in technical philosophical debates turning on the present issues, between the "acts" of subjects (as relatively free actors) and the "behaviors" of objects (as mere mechanical playthings): I don't claim that this distinction always accords with common usage, but I think it often does, and at any rate it helps us get at a difference that makes a difference to us in moral, aesthetic, ethical, and political matters.
Is it right to say of this move of mine that in it I am "directly contradicting literally the whole field of artificial intelligence"?
I would distinguish, in a rough-and-tumble sort of way, the actual software-coding practices, useful general principles, and testable results of that field, on the one hand, from the rhetorical practices through which narrative and figurative frames are mobilized to educate, inspire, and make sense of those practices, principles, and results, on the other hand. The notion of "artificial intelligence" as it presently plays out in the world depends for its force on conceptual confusions and ill-digested metaphors, as far as I can see.
You will notice that shade-aversive plants are no more intelligent than machines that likewise satisfy the definition of intelligence cited above. Further, it is rightly a matter of some controversy whether we should impute "success" to the behavior of a system that has no personal stake in its accomplishment. And, again, I have used the word "behavior" because I want to distinguish, in this context, acting as a political term from behavior as a more conventionally causal one. Strictly speaking, it is not a denigration of the useful results arising out of computer science (even when it decides it wants to call itself "artificial intelligence" without reason or sense) to point out that there are practical, conceptual, ethical, and other problems afoot in the ways in which computer science sometimes goes on to make sense of what it is doing and where it is going.