Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Sunday, January 27, 2013

More Wrong

Upgraded and adapted from the Moot to the prior post, "JimF" commented:
The rigid, analytical math-oriented bias of that approach to AI [that is, the approach of the singularitarian/Less Wrong/Bayesian triumphalist Robot Cult sect over which guru-wannabe Eliezer Yudkowsky presides] 1) harks back to the GOFAI of the 50s and 60s, when some folks expected the whole thing to be soluble by a smart grad student spending a summer on it 2) reinforces Yudkowsky's own dear image of himself as a consummate mathematician 3) is congruent with the kind of Ayn Randian, libertopian bias among so many of the SF-fan, >Hist crowd.
I think there are enormously clarifying observations packed into that paragraph, and folks really should re-read it.

Speaking of the way such singularitarians and their singularipope hark back to the most failed, most inept, most sociopathic, most boyz-n-toys AI discourse of mid-century Gernsbackian-pulp post-WW2 U!S!A! footurism, I can't help but cite another passage from "Less Wrong" that JimF drew to my attention in a private e-mail a couple of days ago. In it "Stuart_Armstrong" declares:

I've just been through the proposal for the Dartmouth AI conference of 1956, and it's a surprising read. All I really knew about it was its absurd optimism, as typified by the quote:
An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.
But then I read the rest of the document, and was... impressed. Go ahead and read it, and give me your thoughts. Given what was known in 1955, they were grappling with the right issues, and seemed to be making progress in the right directions and have plans and models for how to progress further. Seeing the phenomenally smart people who were behind this (McCarthy, Minsky, Rochester, Shannon), and given the impressive progress that computers had been making in what seemed very hard areas of cognition (remember that this was before we discovered Moravec's paradox)... I have to say that had I read this back in 1955, I think the rational belief would have been [emphasis added] 'AI is probably imminent'. Some overconfidence, no doubt, but no good reason to expect these prominent thinkers to be so spectacularly wrong on something they were experts in.


Although our so-wrong less-wrongist Robot Cultist cannot help but point to the "overconfidence" of these sentiments -- given the actual, factual reality of their complete flabbergasting serial failedness and wrongness and ridiculousness -- you can tell his heart just isn't in all that.

Where sensible people look at these pronouncements and see the radically impoverished conception of intelligence and ridiculously triumphalist conception of technoscience driving the discourse, the Robot Cultist finds himself saying... Dag, those dumb sociopathic white guys were really onto something there! Man, were they rational and right or what to believe so irrationally in what was so wrong! Gosh, I sure love those guys! Notice that even in this retroactive assessment the Bayesian triumphalist cannot let the, you know, reality of how "spectacularly wrong" they all were get in the way of the still-unqualified, still-energetic assertion that this army of fail was filled to the brim with "prominent thinkers" and "experts" in sound AI.

About Jim's glancing reference to the Randroidal pot-boiler & pulp SF associations of this Bayes/AI-fandom I'll add my own glancing references, noting first that the entitative figuration of their AI discourse remains far more beholden to sfnal conceits than software practice, and also pausing momentarily to observe how curiously often sooper-genius Yudkowsky's highest profile formulations have seemed to depend on frankly facile, rather ungainly, high-school English level appropriations from popular fiction like Flowers for Algernon or Harry Potter. No doubt a paradigm-shattering "metaethical" treatise riffing on I Am the Cheese is soon forthcoming.

8 comments:

joe said...

I have a question. Has anyone actually seen any coding Yudkowsky has done?
He keeps talking about what an uber coder he is, but I have not seen anything or talked to anyone who has seen his work, or at least any high-end "this guy is a genius" type stuff.

Has he shown it to anyone over on LW?

Dale Carrico said...

As I never tire of pointing out, futurology is a discourse -- it produces rhetoric for ideological and subcultural-signalling purposes, not actual scientific or policy results. That's why a trained rhetorician is a reasonably good candidate to criticize futurologists, and also why rhetorical analyses provoke howls among futurologists demanding to be assessed only on the "technical merits" -- but always only on their idiosyncratic terms. So, of course, you're right -- the singularipope can't pass muster as an actual coder or scientist (few of the futurologists can, and even those very few who do wear real science or technician hats at some points in their lives aren't doing their science when they shift to their futurological fulminating, as witness Kurzweil), but whatever his PR to the contrary, it seems to me that what is interesting, wrong, and substantially productive (in that counterproductive way of his) about his futurological practice has never actually been about any of that anyway.

jimf said...

> I have a question. Has anyone actually seen any coding
> Yudkowsky has done?

He claimed once upon a time to have written (or started work
on) a text editor for an early (Mac 128K, OS 5 era) Macintosh,
using the CodeWarrior development environment, when he was
a wee tyke.

He launched a SourceForge project for a
programming language he designed, which he called "Flare".

I believe he claimed to have hand-coded his
early Web articles in HTML -- either that, or he produced them
(or produced later versions) using an HTML editor he
wrote (in Python? He was big on Python once upon a time.)

But no, AFAIK he's never produced substantive code --
certainly no code for an AI.

If anybody knows better than that, we're all ears.

Dale Carrico said...

Sheesh, I hand-htmled my first web pages in '93 or '94, and I'm an effete aesthete pomo relativistic luddite sheeple mehum.

jimf said...

> No doubt a paradigm-shattering "metaethical" treatise riffing
> on I Am the Cheese is soon forthcoming.

Thank God he hasn't attempted Tolkien. I don't think I'd
survive. ;-> (Though he did once, long ago, ask me to
translate something into Quenya.)

jollyspaniard said...

I wrote a nine-room Dungeons and Dragons game back in '81, where's my cult!

jimf said...

> I wrote a nine-room Dungeons and Dragons game back in '81,
> where's my cult!

http://lesswrong.com/user/Dmytry/overview/?count=20&after=t1_6c8z
[Dmytry Lavrov]
-----------------
"I don't like when someone picks up untestable hypotheses out
of scifi. That is a very bad habit. Especially for Bayesians."

"The issue is that it is a doomsday cult if one is to expect
extreme outlier (on doom belief) who had never done anything
notable beyond being a popular blogger, to be the best person to
listen to. That is incredibly unlikely situation for a genuine risk.
Bonus cultism points for knowing Bayesian inference but not
applying it here. Regardless of how real is the AI risk. Regardless
of how truly qualified that one outlier may be. It is an
incredibly unlikely world-state where the AI risk would be
best coming from someone like that. No matter how fucked up
is the scientific review process, it is incredibly unlikely
that world's best AI talk is someone's first notable contribution."

> Less Wrong has discussed the meme of "SIAI agrees on ideas that
> most people don't take seriously? They must be a cult!"

"Awesome, it has discussed this particular 'meme', to prevalence of
viral transmission of which your words seem to imply it attributes its
identification as cult. Has it, however, discussed good Bayesian
reasoning and understood the impact of a statistical fact that even
when there is a genuine risk (if there is such risk), it is incredibly
unlikely that the person most worth listening to will be lacking
both academic credentials and any evidence of rounded knowledge,
and also be an extreme outlier on degree of belief? There's also
the NPD diagnostic criteria to consider [Whoa! Ouch!]. The probabilities
multiply here into an incredibly low probability of extreme on
many parameters relevant to cult identification, for a non-cult.
(For cults, they don't multiply up because there is common cause.)"
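
A back-of-the-envelope sketch of the multiplication being
gestured at above (in Python, with made-up illustrative base
rates -- nothing here is measured, it just shows the shape of
the argument):

    # Illustrative only: invented base rates, not measurements.
    # Three "extreme outlier" markers from the quote above: no
    # academic credentials, no prior notable work, extreme doom
    # belief.
    p_traits = [0.01, 0.01, 0.01]

    # Non-cult case: the traits are independent, so the joint
    # probability is the product of the marginals.
    p_independent = 1.0
    for p in p_traits:
        p_independent *= p
    print(f"independent (non-cult): {p_independent:.0e}")  # 1e-06

    # Cult case: a common cause makes the traits co-occur. Given
    # the cause, each trait is nearly guaranteed, so the joint
    # probability stays near the base rate of the cause itself.
    p_cause = 0.01             # invented base rate of common cause
    p_trait_given_cause = 0.9  # traits highly likely given cause
    p_joint_cult = p_cause * p_trait_given_cause ** len(p_traits)
    print(f"common cause (cult): {p_joint_cult:.2e}")  # ~7.29e-03

Independent outliers compound toward one-in-a-million territory,
while a common cause keeps the joint probability near the base
rate of the cause itself -- which is exactly the point of that
parenthetical.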


Well, you can fool all of the people some of the time,
and you can fool some of the people all of the time,
but, if nothing else, Google makes it harder these days
to fool all of the people all of the time.

http://amormundi.blogspot.com/2007/10/superlative-summary.html
--------------
Utilitarian said...

Yes, the improbable attribution of the conjunction of ultra-extreme
ability, altruism, and debiasing success when these are quite imperfectly
correlated is among [a guru's] most suspect claims.


;->

jollyspaniard said...

Cheers for the link, interesting reading.