Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Monday, October 29, 2007

Debating Singularitarians

I keep thinking I'll take a break from the discussion of Technological Superlativity, but all the energetic conversation keeps pulling me back in. This is upgraded and adapted from the comments on yesterday's post. There's quite a lot of interesting and contentious debate happening there, well worth reading. As always, thanks for all the comments, everyone.

Friend of Blog Michael Anissimov points out that: SIAI [The Singularity Institute for Artificial Intelligence for those who don't know -- something like Robot Cult Ground Zero as far as I can make out, and with some rather prominent "transhumanist" folks on board] works towards Friendly (through whatever means works, something other than mathematical-deductive if necessary) seed AGI because the people in the organization see it as a high moral priority.

Michael, how's about a nice definition of "Friendly Seed AGI" for the kids at home? In a nice little sentence or two, without frills. The terminology isn't at all widespread, as you know -- especially among professional and academic computer scientists. Of course, I have my own sense of what this "high moral priority" amounts to, in fact, but I'd like to hear it from you. As extra credit, I'd be curious if you could define "intelligence" (a concept on which "Friendly Seed AGI" depends) in a comparably pithy way.

Michael continues: This is humanity's first experience of stepping beyond...

Cue the music.

The question is not "if" intelligence enhancement technologies will be available, but "when".

Actually, there is still quite palpably a question of "if," when we are talking about whether the Strong Program of AI (in any of its current variations) will bear fruit, in fact. And, yes, Virginia, one can say that while still maintaining that human intelligence is an entirely worldly non-supernatural phenomenon. By the way, quite apart from the fact that the question of "if" actually does remain on the table for anybody with any sense, at least enough so to prompt caveats in one's pronouncements on the topic, there also remain questions as to when the question of "when" might as well amount to the question of "if" due to the timescales and complexities involved.

Now there's nothing at all wrong with these conventional human patterns,

Gosh, that's big of you. And I for one would like to thank our future Robot Overlords...

but we have to note [we are compelled to note, by some unspecified necessity -- believe me, it isn't logic] that the introduction of enhancement technology is bound to [again the conjuration of necessity, certainty -- bound to by what exactly? where from? One wonders.] throw the existing order out of whack.

"The Existing Order Out of Whack." A heady vision, to be sure. You say, "out of whack," I notice, as if to suggest complete transformation, derangement, confusion, unpredictability, but my guess is you think you have a pretty clear idea of how it's gonna go down, Michael, when all is said and done. This reminds me of my reading of the short story "A Gentle Seduction," a few days ago -- Jack claims that the Singularity will involve unprecedented unfathomable change, but the truth is he already knows everything that will come to pass with the clarity of an Old Testament Prophet, and hence confronts the scrambling of everybody else's world with relative equanimity. This is how True Believers always feel about their Pet Raptures, of course.

What I enjoy about this spectacle is all the misplaced certainty and necessity of the phrasing, never with much in the way of admitting how freighted all of these pronouncements are by caveats, qualifications, unintended consequences, sweeping ignorance of fields of relevant knowledge, indifference to historical vicissitudes, and so on. All that stuff is bracketed away or perhaps never even enters the Singularitarian Mastermind in the first place, and only the stainless steel trajectory to Singularity luminously remains.

It shouldn't be hard to imagine

Of course not. We've all read sf, watched sf movies and tv shows, seen the ubiquitous iconography on commercials, etc. No, it isn't hard to "imagine" at all.

that enhanced humans or AGIs will get to the point of being substantially smarter than the smartest given humans.

Smarter -- how? Of just what does this smartness consist? How many dimensions can it have? How do they relate to one another? How does its embodiment enable and delimit it? But bracketing all that stuff for a moment, I have to wonder why Singularitarians seem to be so little interested in the actually existing facts on the ground that there is greater "intelligence" in cooperation, in non-duressed functional division of labor, in digital networked p2p production already, here and now in the real world? Why not devote yourself to unleashing the intelligence humans already palpably demonstrate a capacity for, a desire for, in the service of shared problems that are all around us?

Why do you think I advocate a basic income guarantee? Of course it's the right thing to do, of course it provides a basic democratizing stake protecting all people from exploitation by elites, but also it would function to subsidize citizen participation in p2p networks, creating, editing, criticizing, organizing in the service of freedom.

So much of the Superlative Singularitarian Robot God discourse just looks to me like a funhouse mirror in which symptomatic hopes and fears are being expressed (fine as far as that goes -- it's like literature in that respect, to the study and teaching of which, you may have noticed, I have devoted no small part of my life), superficially and parasitically glomming on to a few scattered software and robotic security problems, handwaving some generalized millennial anxieties about technological change via some utterly conventional science fiction tropes, and then selling the moonshine -- whether earnestly or cynically is a matter of "if the shoe fits, wear it" -- to the masses, tossing out the misleading oversimplifying usually anti-democratizing frames into public discourse, hoping to skew budgetary priorities from more urgent needs, likely deranging the form funding and research take as they apply themselves to actual problems of malware and automation (rather as cybernetic totalist ideology deranges coding practices already), and so on.

If intelligence enhancement tech really does produce a superintelligence, then we have a moral duty to maximize the probability that said superintelligence cares about humanity as a whole, not itself or any narrow group of humans. Otherwise the outcome could be grim. A few thousand Europeans enslaved native populations of millions with "only" somewhat more advanced technology.

If if if if if if if if if if if if if if if if -- and then, miraculously, the conjuration of global devastation and enslavement. You just cannot know how clownish and cartoonish this appears outside the bubble of True Belief. A scenario that once caveated is diminished into near total irrelevance is instead hyperbolized back into pseudo-relevance. You might as well be talking about when Jesus comes or when the flying saucers arrive.

Now, before you inevitably misread the substantial force of what I am saying here as yet more evidence of my lack of vision and imagination, or my lack of scientificity and know-how (these two critiques are the most common ones so far -- I wonder if their proponents have noticed that they are making literally opposite claims)... let me stress that to the extent that software actually can and does produce catastrophic social impacts (networked malware, infowar utilities, automated weapons systems, asymmetric surveillance and data-manipulation and so on) these are actual problems that should be addressed with actual programs on terms none of which are remotely clarified by the Superlative Discourse or the sf iconography of entitative post-biological superintelligent AI or eugenicized intelligence-"enhancement."

It's not that I utterly "discount" the "5% risk" you guys constantly use to justify passing the collection plate at your Robot God Revival Meetings (even though, truth be told, you pull that number out of your asses and can't yet even define basic terms -- "intelligence" "friendliness" -- on which you depend, at least not to the satisfaction of Non-Believers in the very fields you claim to lead as visionary sooper-geniuses), nor do I claim "certainty" that nothing like the scenarios that preoccupy your attention "will" or "can" come to pass.

My critique of the Singularitarian Variation of Superlativity has never taken that form. That's because, not to put too fine a point on it, I don't think you guys are ready for prime time critique in that vein. While you are playing at being scientists, most of your discourse has far too much Amway and Heinlein in it to really qualify for that designation by the standards I am familiar with (bad news for you: I have actually taught courses in the history and philosophy of science -- I know that will come as a shock to those of you guys who like to dismiss me as an effete elite aesthete too muzzy-headed to grasp the hard chrome dildo of your True Science -- and I fear that by my lights what you guys are spinning is scarcely science, apart from some occasional nibbling at the edges).

But, anyway, again it's not that I dismiss your various likelihood and timeline estimations (considering them more as a line of hype than real efforts at science in the main), so much as that I think such risks as one can actually reasonably attribute to networked malware and lethal automation and the like are best addressed by people concerned with present, actually emerging, and palpably proximately upcoming technodevelopmental capacities rather than uncaveated and hyperbolic Superlative idealizations freighted with science fiction iconography and symptomatic of the pathologies of agency very well documented in association with technology discourse in general.

(Some advice: one day, when the mood strikes you, you might read some Adorno and Horkheimer, Heidegger, Arendt, Kuhn, Ellul, Marcuse, Foucault, Feyerabend, Winner, Latour, Tenner, Haraway, Hayles, Noble, for some sense of the things people know about you that you don't seem to know very well about yourselves as technocentrics. You won't agree with all of it, as neither do I, but if you take it seriously you will come out of the experience feeling a bit embarrassed about the unexamined assumptions on which Superlative Technology Discourses always fatally depend.)

So, the idea is to "get them while they're young": create superintelligences with altruistic goal systems. SIAI is the only organization pursuing this goal in a structured manner.

Yes, yes, I know. The idea is that the Singularitarians are the Good Guy sooper-geniuses in the thankless role of saving humanity from the Bad Guy sooper-geniuses who by design or through accident will create the Bad Robot God who will destroy or enslave us, while you want to get there first and create the Good Robot God who will solve all of our problems ('cause, he's infinitely "smarter," see, since "smartness" is a reductively instrumental problem-solving capacity and problems are reductively solvable through the implementation of instrumental rationality) and save the world. It's like a Robot God arms race, a race for time, urgent, in fact nothing is more urgent once you "grasp" the stakes, hell, billions of lives are at stake, etc etc etc etc etc. Complete and utter foolishness. But, of course, very "serious." Very "serious" indeed.

Look, if an SIAI Singularitarian looked to be on the verge of creating anything remotely like its Robot God (just bracketing for a moment the deranging conceptual entanglements of talking in these terms in the first place), you can be sure the Secret Lab or what have you would be closed down immediately and the people involved thrown in jail as net-criminals or even terrorists (and a good job too) as far as I can tell. Otherwise, it would only be corporate-militarists themselves who would have the resources and the will and the authorization to create such a "thing." They certainly would use advanced lethal automation and malware and infowar utilities for malign purposes (as they do almost everything already).

If you were really serious about Unfriendly Robot Gods or their non-superlatized real-world analogues, you would be engaging in education, agitation, and organizing to diminish the role of hierarchical formations like the military in our democratic society -- demanding an end to secret budgets and ops, making war unprofitable, supporting international war crimes and human rights tribunals, and so on. That anti-militarist politics coupled with support of international and multilateral projects to monitor and police the propagation of networked malware, stringent international conventions on automated weapons systems, and similar politics is what a Technoprogressive sounds like on this topic.

Nothing is clarified by the introduction of Superlative Discourse to such deliberation, only the activation of irrational passions in general, and an increased vulnerability to hyperbolic sales-pitches and terror discourse of a kind that incumbent interests use to foist absurdly expensive centralized programs down our throats to nobody's benefit but their own.

31 comments:

Michael Anissimov said...

Dale,

I have to say right off that the somewhat disrespectful way you engage in discussion makes me less motivated to spend time on it. For instance, "Cue the music", "Gosh, that's big of you", "True Believers always feel about their Pet Raptures", etc., show that you aren't really taking my opinion or statements very seriously at all. In your responses, your tone doesn't even address me directly; it sounds more like an attempt to mock me in front of some sympathetic third party audience. Such a disrespectful way of interacting would be frowned upon at a round table meeting or in a classroom context. At a cocktail party, it would cause someone to simply walk away.

I am interested in your criticisms of Singularitarianism because I believe they reflect the concerns of a wider group of people who are silent. But, I find it difficult to engage with your venomous and sarcastic tone. I wish we could talk at least under the pretense of mutual respect. (I have respect for your ideas but the inverse clearly does not apply.)

I don't have a clear idea of what goes down when we create human-equivalent AI or enhanced human thinkers. I wish you wouldn't put words in my mouth and claim that I do have an idea, because I don't. There are a range of possible outcomes, but it's useless to delve into them if one doesn't even believe the underlying premise: that significant intelligence enhancement is technologically possible.

I'm barely even into SF. I don't watch much television or many movies either. I watch anime that is mostly fantasy, not sci-fi. I got into transhumanism by reading non-fiction books. Most fiction deals with AI in a really anthropomorphic way, so it doesn't factor into my thinking about the future of AI in the real world. I dislike much sci-fi and often give it negative reviews, like the negative review of Accelerando I wrote about a year ago.

I don't think you have a lack of vision or imagination. I disagree with Roko's critique of you in his recent post.

I don't think that the risk of rogue AI is only 5%; I think it is substantially greater than that. Like James Hughes and many other transhumanist philosophers, I believe human-level AI is likely to be developed in the first half of the coming century.

Problems are not "reductively solvable through the implementation of instrumental rationality"; they are addressed using a variety of techniques such as communication, charisma, creativity, research, brainstorming, and experiments. Any AI of any use would need to possess all these characteristics, or it wouldn't truly be human-equivalent. If it does possess these characteristics, then it could certainly help with human problems.

I am sympathetic to many of the causes you list, but think that implementing many of them would be nearly impossible. For instance, the USA is never going to put itself in a position where its politicians or generals are subject to punishment by international courts, no matter how much we try to institute such a structure. Making war unprofitable is incredibly difficult, and I'd advocate a technological solution -- clean, abundant energy through solar power. The US military is never going to reveal its precise budget, that is a fantasy. And military funding is not going to be appreciably decreased, because since 9-11 people are on edge. Russia is threatening us, Iran is threatening us, China is highly militarized, etc. I have opinions on all the causes you mention but I think that politics as usual is not the way to go about it.

Technological solutions, such as increasing transparency, will help circumvent impasses that have held since the beginning of civilization. Humans have had basically the same motivations since ancient times, but technology changes. Technology such as air travel has fundamentally changed the way people worldwide interact. I think that transparency (facilitated by technology) coupled with demands for government accountability (facilitated by activism) will reduce militarization, but the technological component is critical. If it were so easy to get the militaries of the world to put down their arms, it would have been accomplished a long time ago. There is too much international tension. An eventual world government or strengthening of the UN could help in this regard.

To realize many of your political goals would require that the Republicans in this country magically disappear, which isn't going to happen. I advocate some centrist political positions because I see them as a possible equilibrium between demands from opposing sides of the political spectrum. Many of your socialist ideas could never be successful; they merely aggravate the conservative crowd and cause the pendulum to swing in the other direction. I am a liberal but I am also being practical.

Dale Carrico said...

This is your latest move? To whine that I'm being mean and say that I'm too immoderate compared to the "centrists" in a Robot Cult filled with libertopians, reductionists, and cosmic survivalists?

But I'm a leebrul, really and for true! Stop being so mean to our Republican friends with your eeevil socialism! Gimme a break.

In addition to providing a fully elaborated sociocultural critique of Superlative Technology Discourses it is also just as true as it has always been that I am given to ridiculing what I take to be ridiculous. This is nothing new and you can scarcely pretend not to know all this already.

Anonymous said...

I see that you're training for the 2008 Keyboard Warrior World Championchips, Dale. Looks good, although you might be in trouble if a certain William Dembski shows up.

Anonymous said...

ChampionShips. So, there.

Anonymous said...

"corporate-militarists"
Dale,

What precisely do you mean by this term? Military spending and military contractors form only a very small portion of the corporate world. If Google were to create an AI on its own, without military supervision, would you still characterize this as a "corporate-militarist" creation?

Dale Carrico said...

Corporate-militarism is my preferred term for "neoliberalism."

As for what word I would use if Google created an "AI" -- whatever that's supposed to mean -- I'll cross that bridge when we come to it.

Anonymous said...

"Otherwise, it would only be corporate-militarists themselves who would have the resources and the will and the authorization to create such a "thing.""

"As for what word I would use if Google created an "AI" -- whatever that's supposed to mean -- I'll cross that bridge when we come to it."
It seems that by using the term in discussion now, you've come to that bridge, and ought to make your meaning clear.

Anonymous said...

http://en.wikipedia.org/wiki/Neoliberalism
In general neoliberalism seems to be pretty clearly defined in terms of economic policies, rather than military ones. Certainly it's not the same thing as neoconservatism, and many neoliberals about economic policies would like to see the U.S. military downsized and its role greatly limited. So why the 'militarist' in 'corporate-militarist'?

Is this an idiosyncratic use to refer to Bill Clinton/DLC types?

Dale Carrico said...

In general neoliberalism seems to be pretty clearly defined in terms of economic policies, rather than military ones. Certainly it's not the same thing as neoconservatism

Neoliberalism is not the same thing as neoconservatism, indeed, but I agree with David Harvey (among others) that they are closely interconnected historical phenomena, and often outright interdependent. I am also persuaded by Mike Davis's scholarship on this deep interdependence. Market fundamentalist ideology is happy to disdain welfare -- except welfare for the rich stealthed as Defense -- and happy to argue against coercion -- except for the police who would duress the exploited into accepting wage slavery against their best interests. It's an old story.

It's true that I am happy to use the term, among other things, to indicate that there are deep continuities between the attitudes and policies of corporate DLC democrats and Movement republicanism, often connected to Third Way politics in the general skew of public discourse ever rightward (a skew that doesn't reflect at all the progressive attitudes on specific issues of the mainstream throughout this entire corporate-militarist epoch, in the midst of rising cynicism, lowered trust in governance, lowered quality of life, and rising discontent).

Anonymous said...

"Market fundamentalist ideology is happy to disdain welfare -- except welfare for the rich stealthed as Defense"
This touches on what I was getting at above with the mention of the small role of military spending relative to the overall corporate sector. Empirically, military spending is rarely a primary means of transferring wealth to the rich.

Very little of total military spending in the U.S. actually goes as welfare to the rich: military salaries certainly don't qualify, and payments to contractors like Boeing mostly go to middle class employees and stockholders (often pension and mutual funds). Depending on how you account for the tax system (i.e. whether you count failures of the tax system to be maximally progressive as welfare), particular tax breaks dwarf the scale of military pork going to the wealthy (much goes to Congressional districts to bribe the electorate). Contractors like Blackwater and Halliburton create some new opportunities, but the total scale of corporate welfare there remains small relative to the total across all sectors (farm subsidies, oil subsidies, copyright extensions, tax treatment of carried interest, etc, etc).

In countries that really are oligarchies crafted to transfer wealth to an elite, non-military mechanisms predominate, e.g. in Mexico the granting of telecom and other monopolies to the Slim family. In Africa natural resource revenues and their theft loom large.

Criticizing military spending with every rhetorical tool at hand will often be reasonable, since military activity is often so terrible, but spending there does not seem to be intrinsically much more weighted towards benefiting rich incumbents. Heck, vast amounts of foreign aid, among the very best spending whatsoever, are diverted to purchases from selected American suppliers at greater cost than purchases in the countries being targeted for aid.

Dale Carrico said...

"As for what word I would use if Google created an "AI" -- whatever that's supposed to mean -- I'll cross that bridge when we come to it."

It seems that by using the term in discussion now, you've come to that bridge, and ought to make your meaning clear.


I've made very clear that in the Superlative Technology Discourses I am critiquing -- both in ways very familiar from decades of criticism of the facile failures of the Strong Program of AI and in ways that are more idiosyncratic to me due to the novelty and marginality of Singularitarianism in particular -- to the extent that "AI" is meant to refer to an entitative post-biological superintelligence arriving sufficiently near-term to demand we skew budgetary and policy priorities away from easily demonstrated urgent needs to address its arrival, such "AI" is [1] conceptually confused, [2] sublimely indifferent to the contexts that invigorate its discourse, [3] deeply freighted with essentially religious significance, [4] caught up in the identificatory energies of a marginal and defensive subculture (so marginal and so structurally connected to specific membership organizations that the better term for the subculture, for some of those who so identify, is simply "cult"), [5] deeply imbricated in corporate-militarist "developmental" frames that, coupled with its tendencies toward technocratic elitism and its messianic sense of existential risks that only it properly understands, lend themselves especially to centralized, industrialized, imposed solutions that preferentially benefit incumbent interests to which I am opposed as a champion of democracy.

It isn't clear what "bridge" I am supposed to have arrived at. It's not clear to me why you would claim I haven't made my "meaning clear" in such matters. (I'm sure, for example, in reading the laundry list preceding you will have the feeling that you have heard all of this before: this is because, you have; and this is because: I say these things over and over again; and this is because: by saying these things over and over again I make my meaning clear; and I have done.)

The reason I hesitate to blithely say what I'd call a Google "AI" is because it isn't clear at all to me what saying these words for a crowd of mostly lurking Singularitarian-identified Robot Cultists will even mean to them.

I am not prepared to concede any ground to the figures and frames of Singularitarians that seem to me incoherent at their base: I have not yet been convinced that Singularitarians have a basic grasp on the idea of intelligence as such, its social character, its embodied character, nor do they seem to me necessarily aware of the complexities that are actually mobilized by their glib deployment of morally freighted uses of words like "friendliness"; nor do I have any sense at all that they grasp any number of key political questions connected to their discourse, why technological determinism is false, why technology is not socially autonomous, why technocracy is deeply anti-democratic, why risk discourse can be dangerously anti-democratizing (that they can be unaware of this in the midst of the so-called Global War on Terror, a classic demonstration of the dangers of this kind of hyperbole attacking the very root of American democracy, doesn't say much that is good about their default political temperament -- especially in light of the history of Randianism, anarcho-capitalism, crypto-anarchism, Extropianism and so on that is historically quite closely correlated to many of the very same people who now figure among the Singularitarians and among the canonical texts they still recommend).

Why should I accept discussion on terms controlled by reactionary reductionist techno-utopians in a marginal Robot Cult, exactly? They may not be able to grasp their objective situation but I certainly can.

Dale Carrico said...

Empirically, military spending is rarely a primary means of transferring wealth to the rich.

I'm curious, would you agree that the computer industry in California is utterly indebted to military spending in fact? The interstate highway system? What about the industrial infrastructure created by WWII?

I am curious about the rhetorical force of the word "empirically" with which you began this utterance: isn't an enormous amount of work being done here by initial assumptions and definitions that are not themselves empirical necessarily?

Anonymous said...

I take corporate welfare or welfare for the rich to mean policies that preferentially enrich the favored groups at the cost of the public fisc/general welfare. In assessing the corporate welfare component I look at how much it actually enriches the favored groups.

If the Bush tax cuts enrich their targeted constituencies by hundreds of billions of dollars, while only billions or tens of billions of dollars out of many hundreds of billions of dollars in military spending go to enrich executives in military contractors, then I think that the former policy is a more significant tool of wealth transfer to the rich. I don't think the underlying assumptions there are terribly controversial (for those who treat the Bush tax cuts as redistributive). I used the term 'empirically' because I think the actual amount of money handed over to the rich matters.

Since the interstate highway system produced large positive externalities and enriched the citizenry as a whole I wouldn't characterize it as welfare for the rich, except insofar as the government overpaid contractors more than was required to get the work done or allocated the contracts to political allies.

The computer industry's development has definitely depended on government demand for its products and especially government funding for R&D. On the other hand, that R&D was a great investment for the country and the planet as a whole, and competition among firms limited the quantity of rents dispensed to politically connected firms (although certainly quite a lot of rents were handed out). If we get to the point of attributing all profits earned anywhere using computer technology descended from military-funded inventions as welfare for the rich, then there will be a lot of such welfare, but I would have to revise my negative attitude towards 'corporate welfare'.

Anonymous said...

Dale, do you have another post detailing how the Singularitarian view of intelligence is incoherent? You keep making this claim, but I never see an explanation beyond "it neglects the social and embodied elements".

(Personally, I think a lot of rhetorical confusion could be avoided by using the words "optimization process" instead of "artificial intelligence". Non-anthropomorphism and all that.)
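To make that suggestion concrete, here is a minimal sketch (in Python, purely illustrative; the function names and numbers are mine, not anyone's actual code) of what "optimization process" denotes in this usage: a loop that blindly ratchets a candidate toward higher scores on some objective, with no beliefs, moods, or motives to anthropomorphize.

    import random

    def hill_climb(objective, candidate, steps=1000, step_size=0.1):
        # Greedy stochastic hill-climbing: propose a random nearby
        # candidate and keep it only if it scores strictly higher.
        best = objective(candidate)
        for _ in range(steps):
            neighbor = candidate + random.uniform(-step_size, step_size)
            score = objective(neighbor)
            if score > best:
                candidate, best = neighbor, score
        return candidate, best

    # Example: maximize a simple concave objective peaked at x = 3.
    # The loop "seeks" the peak only in the thin sense that scores
    # ratchet upward; there is nothing here to be friendly or
    # unfriendly in the anthropomorphic senses under dispute.
    peak, value = hill_climb(lambda x: -(x - 3.0) ** 2, candidate=0.0)

Whether anything on the scale the Singularitarians imagine is well described by this sort of schema is, of course, part of what is in dispute.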

Anonymous said...

From: Chimera Proxyment
To: Michael Anissimov and Dale Carrico

Both of you should collaborate. Opponents in War-Debate, for all the differences they use to justify conflict, more often than not share the identical signature of function.

The territory they fight over must thereafter fall under the government of the victor. The leadership must manage the same issues. The loser often only loses temporarily, and the actual-real work and resulting power come from their efforts. Movements directed by a lordship will carry traits of the victor, yet the working culture forming the basis of development digesting these changes remains virtually the same people/thoughts/issues as before and from this future leadership emerges. On this basis, losing populations and ideas often remain unchanged in basic character and form a resilient pattern in the affairs of men and ideas. They merely come back in a newer and stronger form later.

Superior victory comes with the 'winner' recognizing the preceding. Instead of vying for absolute control, the winner instead looks to the true well-being of the whole: winners, the defeated, and those disinterested in conflict who yet fall under the sway of the pattern both forces create by their activities. In civilized discourse about ideology and functional philosophy, conflict destroys the integrity of both sides: only what comes after unified and whole will continue. Protracted engagement can well define unwholesome characteristics to any group or vision, while another victor who can make this leap in perspective may do away with the whole of a floundering cause and re-create from the ashes of two previous combatants something 'bigger-and-better'.

What I see:
Visionary (very useful function for directing the Telemetry of subjects involved within a movement)
Practical (extremely useful to deal with the here and now to obtain satisfactory working performance)

Neither must 'bow' to the other and working in harmony both possess vital functional dynamics along with limitation. Should conflict continue; each in turn demanding change in the other: Both can perform dysfunctional. If a practical thinker does not actually conform to a vision but instead merely uses his ideological platform for a resource with which to attack visionary 'different': then why the effort?

Don't you have practical 'here and now' things to take care of? If the visionary's concerns possess flaws too great in magnitude to manage, then the followers and vision will eventually die out. So why make any effort to attack the failed position and structure of the ideology? If, however, you expend energy on an attack rather than your own visions, perhaps the underlying motivation stems from a certain need not brought to consciousness: 'looking for a visionary?'

In my opinion (I respect you both) two minds coming together over defining a future will guard both of you from some other ideological raider using the already strong Transhumanist movement -- which seems somewhat mired in the conflict between that vision and the here and now -- and putting both of your philosophies into a Kant-like rubbish-bin.

You two must feel much more is at stake here than either would outright admit. I agree that you do. These issues will impact everyone who uses technology in the next few decades. Do you really possess the abilities between the two of you to effect change? Both sides coming together would become useful for the whole while I only see conflict damaging the issues on both sides. Try to see a continuum of issues and a whole rather than points of contention. If nothing ‘works’ at this point: then I would leave the arena. Consider yourself just a little wiser by avoiding entrapment for nothing one can gain.

VDT said...

Corporate-militarism is my preferred term for "neoliberalism."

Oh I see.

I always thought "corporate-militarism" was your term for the iron triangle of the military-industrial complex. Was I wrong?

http://en.wikipedia.org/wiki/Iron_triangle

http://en.wikipedia.org/wiki/Military-industrial_complex

Robin said...

Damn, Dale. You shouldn't have to beat down the Robot Revival tent on one side and the RonPaulies on the other. Common thread? Science Fiction.

I just wanted to come in and applaud that list of authors you mention (Adorno and Horkheimer, Heidegger, Arendt, Kuhn, Ellul, Marcuse, Foucault, Feyerabend, Winner, Latour, Tenner, Haraway, Hayles, Noble) some of whom I despise and some of whom taught me some important things about science in spite of my not wanting to learn it.

But I wouldn't hold my breath that anyone who takes seriously the strong AI claims is going to seek out (or understand) much of what they might read in those texts. I was trained in theoretical strong AI, and I only read those books because I was forced to. And then it took another few years before I could admit how valuable they'd been. (This is also why it pains me to see these constant arguments between you and the strongAI folk - you're working with an understanding of the world that is vastly different than theirs. It's almost painful sometimes to see the clash of these 2 worldviews. Painful because I don't see a bridge between them, and valuable beyond belief for what that teaches me!)

jimf said...

Robin wrote:

> This is also why it pains me to see these constant arguments
> between you and the strongAI folk - you're working with an
> understanding of the world that is vastly different than theirs.
> It's almost painful sometimes to see the clash of these 2 worldviews.
> Painful because I don't see a bridge between them, and valuable
> beyond belief for what that teaches me!

It's necessary that **somebody** should be providing the critique
that Dale is. In the on-line >Hist precincts, that is. There are
others who could be making a stab at it, but they seem to have
decided that they can't risk burning the bridges.

"Qui tacet consentire," as Sir Thomas More reminds the court in
_A Man For All Seasons_. That would not be a good idea in this
case.

Dale Carrico said...

Robin, about that bridge between the perspectives clashing here? Three words: "Dewey, Rorty, James." Three more: "in that order."

Dale Carrico said...

Giulio: I like to see polemics peppered with obscenities. It feels so homey. By all means continue to write as the spirit moves you. As you pointed out, I certainly do.

Dale Carrico said...

Vladimir: I would say there is a tight interdependence between the military-industrial complex and "neoliberal" ideology -- see Naomi Klein, Mike Davis, Amy Goodman, and the usual suspects.

Robin said...


Robin, about that bridge between the perspectives clashing here? Three words: "Dewey, Rorty, James." Three more: "in that order."


Mmm. Well-taken.

Of course, there's also someone like Hilary Putnam who went from Strong AI functionalism to pragmatism, but wound up with an utterly incoherent worldview because he tried to hold on to the best parts of both systems (I worry about this EVERY DAY).

And I didn't mean to imply Dale shouldn't be taking this challenge on. I LOVE reading these exchanges, no matter how painful it can be to see people talking past one another (or rather, actively listening in the other direction, maybe?)

Dale Carrico said...

Robin, this is exciting news! I've been yearning to meet just the right person to explain to me the attractions of Putnam. He doesn't ever seem to translate for me somehow -- and you're talking to someone who can enjoy Sellars, Quine, Davidson, etc. Putnam? Never seems to happen. We should talk this over, over champagne cocktails!

Anonymous said...

Do need to sign a form?

Perhaps fall under the designation consanguineous, ibidem, or just apud in order to obtain even minor response to the point attempted for consideration?

I admit; would I that in observation went to the point of dismissing a party for dogma come to the conclusion no bridges of commonality existed – ‘at-all’; then I would ascertain my true motive carried no constructive value and possible dogma of my own.

‘Robot-Gods’ or no…The projections for a hard-line do exist. A conflict between the ‘here-and-now’ and ‘any’ telemetry-model-prophesy create situations where ‘dysfunction’ within and between models eventually becomes superseded by the next ‘better-suited’ one. You both really share more in common than you may think.

Do not forget the powerful attribute within which men using self-fulfillment techniques to actually take control of situations of leadership as they become manifest and direct even future potential phases of change outside of the awareness of even adherents. The opponent ‘Always’ looks delusional to the subject-in-conflicts ideological framework.

Remember the way the field of Cybernetics mysteriously ‘changed’ and then seemed to disappear limited to simply self-help books and prosthetics? An entire field of correlated Mathematics and potential impacts taken for the parts into other more limited fields of application; yet epistemology remains the only review of the fundamental nature of these ideas.

Bell’s Theorem + Evolution of Binary-Language (thru Set-Theory) – Noise = A possible means where a sufficiently advanced form of consciousness evolved can escape the time-space limitations. Sure: like how we package space and time and thereafter affect the ‘real-time’ of an action – or – the observer effect in Q-Mechanics.

We may not be able to yet transfer and decode information faster-than-light. Sufficient technology and evolution of intelligence at ‘any-time’ may well come to do a sort of ‘packaging’ of its own and by this means arrange our very own telemetry before our awareness cues us in.

We all seek meaning and ‘completion’. Why?

Why would humans even desire to create an AI?

Why Salamanders?

Robin said...

Dale, it's a date. Joshua keeps trying to drag me down there on one of his many business trips. Next time I'm not in a panic of writing, I'll come down and we can drink and talk Putnam :)

ZARZUELAZEN said...

As regards the SIAI and its founder, E. Yudkowsky.

Here’s a typical stream of Yudkowskian wisdom off the ’Future of Humanity Institute’s’ blog. In this thread, Yudkowsky claims he knows what morality is:

“Long ago, I believed that morality came from outside me, like a great light in the sky, as Terry Pratchett put it. I didn't believe in God, but I believed in morality. If there was no morality, why, that whole case had utility equal to zero, by assumption, so those possibilities cancelled out of the equation - no point in betting on them.
Then I considered, really considered for the first time, the case where I knew for absolute certain that there was no objective morality - which to me meant no morality at all, no "rational" decision. And it came to me that, even so, I would still choose to save people's lives.
Then I realized that *was* morality”
Ref: http://www.overcomingbias.com/2007/10/who-told-you-mo.html#comments

Compare with this paragraph:

‘Long ago I believed that the laws of physics came from outside me…..
Then I considered the case where…. There were no objective laws of physics…. And it came to me that even so, physical objects still moved….
Then I realized that *was* the laws of physics’

--

Spot the fallacy in the second paragraph above? The movements of physical objects are *not* equivalent to the laws of physics. Nor is there any reason for believing that human choice is equivalent to morality. The arguments above are of course completely empty of rational content, but merely ‘proclamations’ offered without supporting evidence. If paragraphs like the above were passed off as arguments in the undergraduate computing course I’m on at the moment, the person who wrote them would be booted right out of the course on their arses on the spot.

Let me give you all a tip. When someone claims to know all the answers to moral questions and sets themselves up as an 'Internet guru' with 'followers' and an Institute requesting 'Donations' to 'save the world', serious warning bells should be going off in your heads.

jimf said...

"Chimera Proxyment" wrote:

> Remember the way the field of Cybernetics mysteriously ‘changed’
> and then seemed to disappear limited to simply self-help books and
> prosthetics?

You mean Norbert Wiener started selling artificial. . . wieners?

I'm afraid I don't remember that.

> Do need to sign a form?

I dunno. I think **somebody** may need to sign one.

jimf said...

Marc Geddes wrote:

> When someone claims to know all the answers to moral questions
> and sets themselves up as an 'Internet guru' with 'followers' and
> an Institute requesting 'Donations' to 'save the world', serious
> warning bells should be going off in your heads.

You'd think so, wouldn't you? But some people (a lot of people)
apparently find this sort of thing attractive. I most certainly
do **not** (though I admit I'm a bit in awe of people who
can take themselves that seriously). Like Anne Corwin, I
first approached the on-line >Hists as a social club, or a
literary salon. I was soon made to feel that that attitude
was "unserious" in the New Regime. (I wish that some of one
particular newcomer's 1996 posts to the Extropians' list were still
in the archive. They were full of contempt for list members -- all
the other list members -- who were all talk, talk, talk and had
no plans to actually get out there and **do** something.)

Some reasonably bright people have a knack for "claiming
to know all the answers" (whether or not the rest of the world
acknowledges that they do). The art of instantly leaping to conclusions
as practiced by someone who takes his own intelligence
very seriously indeed -- a salient characteristic
of Ayn Rand, as her erstwhile friends fondly
remembered -- had a technical name among Objectivists.
It was called "integration", and it was considered
a virtue. It was a sign of having worked so
hard on one's premises, and having worked out the
kinks in one's mind so thoroughly, that one didn't
have to hesitate a moment before passing judgment
on any new piece of information coming along.

In the early 70s, when I was in college for the first
time, I fell in for a year with a group of Baha'is who
proselytized on the campus (I did so because I'd fallen
in love with a student who was also a Baha'i.) During
the time I was associated with these people I met
a curious fellow who was like a Singularitarian activist,
only with regard to saving the world by spreading
the teachings of Baha'u'llah. He even used some of
the same terminology as the S-ians (and this was in
1973), saying that if all the Baha'is were doing their
part, the Faith would be spreading **exponentially**.
Now, nobody with any common sense, even the vast
majority of Baha'is, believes that the Baha'i Faith
is going to take over the world any time soon (and
indeed, it isn't any closer to doing so 35 years
later than it was in 1973). Most of these (reasonably
sensible) people saw their religion as a "moral/esthetic"
project, a private path to perfection as Dale would
describe it. And these people got rather irritated with
this guy going around telling them that they obviously
weren't doing things right because membership in the
Local Spiritual Assembly wasn't doubling every year,
or however often he thought it should be doubling.
In fact, he'd gotten into trouble some time before I'd
gotten to know any of these folks, and he'd been
temporarily excommunicated, or his credit card had
been taken away, or he'd stopped being invited to
Firesides, or whatever sanction the LSA had received the
authority (from the National Spiritual Assembly) to impose
on this guy. The fellow's "suspension" had been revoked,
and he'd been reinstated as a member-in-good-standing,
shortly before I met him, and everybody was trying
especially hard to be nice to him, but he was clearly
**still** taking the same line (which is why he had
to be "explained" to me in the first place -- Baha'is
are ordinarily extremely reluctant -- or at least are
**supposed** to be extremely reluctant -- to gossip
about each other).

The guy just had some kind of personality problem.

Anonymous said...

Really too bad about ol' Norbert's name.

A genius of that caliber and you would think he would come up with something a little less catching in the wrong way; and maybe just a tad more elegant/cool: like Cystrom. He could have gotten away with that, surely.

Norbert Cystrom: Genius-Mathematician and founder of a new branch of scientific inquiry called Cybernetics. You can almost see the synthetic wiring of partly mutilated cats, prosthetic eyes for blind patients and labs where secret research on hypnosis takes place. Norbert Cystrom 'Superstar'.

Instead I just end up thinking about hotdogs and the soon-to-be wife thinking: ‘Gee, that will make for a nice name.’

Ha, ha, anyway…

What does morality have to do with the optimum choices we make, other than guarding us from the moral codes and resultant actions of others? For someone who does very real 'good' for all (beyond that simple-minded morality imposed on us by the in-group 'enforcers' established as 'good-correct' by any groups to which we belong), you can see where any man engaged in superior action can very easily appear the villain and thereafter be called 'evil-crank' or whatever. Like the early scientists.

With what you wrote about the spiritual-man of Baha'u'llah or whatever: He strives for perfection and claims it to be borne of some 'higher-order' of thinking. He claims that his followers failed to attain real 'perfection' simply because the number of new recruits lags behind the rate he assumed by extrapolation of some previous year's rate of induction, a self-generated 'projection' pulled out of thin air without any long-term measure as to the validity of its truth.

I did not know that the population of adherents had anything to do with quality. I mean just look at the public school system or any organized religion that uses a hard-line.

Cults fail to a large extent due to separation from the world and isolated dogmas -- no doubt the followers lack critical thinking; that is why they end up there. The leaders must carry the weight of thinking critically, and frankly, to govern any movement will require more than one man, unless it is a dictatorship. In any large successful business/government/underground-society the use of experts with regards to logistics, strategy, and PR all come into play.

This becomes just too much for one man.

Your Guru perhaps needed a ‘better agent’ to fulfill his claims.

When the culture of dogma becomes too closed-minded within the leadership, these 'professionals' or experts of specialized function often seem just too much and get ousted. Your guru probably needed an expert on social engineering and a statistician to verify and secure his claim. If he had done this, he could make the claim AND become even more respected in fulfillment of what he said would take place. Every movement will possess a well defined set of goals and an explanation of the way it must go about carrying them out. Dogma merely results from crystallization of a core of beliefs to the exclusion of perception of the changes occurring in the environment. So don't worry about dogma; it becomes self-limiting.

Not all spiritual paths possess heavy reliance on dogma; most do however explicate the perspective required for users of a system to perceive 'deeper-truths'. Most religions and cults rely on dogma and heavy conditioning, and true spiritual systems embedded within these organizations remain rare, yet can be found, usually under the protection of some sort of method of secrecy to protect them from the religion's dogmatic system of enforcement.

I do not however think the Transhumanist movement becomes lame, dysfunctional or anything because it has a vision. It only becomes dysfunctional when the stress is on the vision rather than the here-and-now. Thus, with very busy visionaries, experts become essential. The right PR experts could very well fuse the now to that future.

All movements use visionaries. All movements can crystallize into Dogma.
A Vision does not = dogma to the same degree for everyone.

Transhumanism's vitality does not come so much from the technology being feasible, or even in existence. The vision coming together with the here and now can turn an apparent bubble of hype and eventual disappointment into an event with real assets.

Do not look at the bottom-line and donations and advertisement separated from current here and now advances. Instead systemize the identification of current and future lines of technology and apply a direct, material, and profitable working base and combine this with your Vision and PR. Shape the future and if the stones on the road is real the future will come to you.

Anonymous said...

I Mean:

When speaking of the road to the future. If the road is made with real stone, then the future will come to you.

Otherwise ain't no one going to find it.

Anonymous said...

To Marc Geddes:

Well, I would've been surprised if an undergraduate course in computing touched to any great extent on metaethics, which is fundamentally the subject of Yudkowsky's proposition in the quotation provided. Don't stick words into the man's mouth; he doesn't claim to know answers to moral questions, in fact he specifically disclaims such private definitions, nor has he set himself up as an 'internet guru'. I rather suspect that you're just projecting your own perception of him onto your perception of his motivations in this case.

What he is actually saying in the quotation you provided is that non-falsifiable a priori premises are not good bases for logical reasoning. The (natural) universe as we would expect it to look if it lacked all elemental metaethical or moral realities is exactly the same as the (natural) universe would be expected to look if it did not lack these things. As such, it hardly matters whether or not there is a fundamental reality to ethics and morality or if they are just human constructions (since we disagree on nearly all of them anyway) - what Yudkowsky is saying is that the basis for ethical discourse, fundamentally, is subjective. Whatever we may claim about the ethical realities of the universe or of ourselves, WE DO claim.

This also touches on one of the things I found interesting that Dale mentioned early on: He wanted a good definition of "Friendliness" and "Intelligence." I do not consider myself a Singularitarian - far from it - but I have been interested in Yudkowsky and the SIAI particularly for this reason - they don't claim to have any such definitions on hand. In fact, the organisation's stated goal for the foreseeable future is to attempt to develop such definitions.

As any ethicist knows, it is patently easy to say what we do or don't want (at least, it is prima facie easy), and yet a genuinely theoretically robust definition of "Friendliness" eludes us. So with "Intelligence": We seem to 'know' when there's 'intelligence' in something, or we think we do, but we lack a coherent understanding of what this even entails or fundamentally means. This would seem, at least on the surface, to be a product of the fact that we have evolved to be excellent intuitive psychologists, but of course, intuition clearly has its limitations ("Friendliness" and "Intelligence" go to show).

In my opinion, the primary.... utility that the SIAI is likely to provide at all is in contributions to a future "Friendliness" theory, since it strikes me that artificial intelligence, general or otherwise, is the ideal field in which to pursue a study of ethics and metaethics (along with neuropsychology), due to the fact that apparently only 'intelligent' things seem to have ethically relevant interests. Although we can't agree on much, ethically speaking (we can't agree on right or wrong, good or evil, or even whether and to what extent a class or classes of things allow us to define or comprehend the degree to which any of these things are appropriately described in their aspects in ethical terms), we seem largely to be able to agree that certain things are clearly (read: intuitively) NOT appropriately described in ethical terms, and a very significant average characteristic of those things is that they are not what we would impulsively define as intelligence bearing systems. If it is true that metaethical realities are fundamentally non-falsifiable, then it is also true that any study of "Intelligence" is clearly the ideal planting ground for the seeds of any coherent study of "Friendliness" (ethics).

I'm personally very sceptical about the feasibility and especially about the desirability of some sort of "Seed AI" as the SIAI would describe it, but if they have something useful to say on the subject of ethics and intelligence (and they are at least 'receptive towards having something useful to say' because they do not a priori claim to know what should be said), then I think we should at least be ready to listen in for that (if only that alone).