Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Sunday, October 12, 2014

Fluffing Yud

In the Moot, long-time friend-of-blog "JimF" cites "Meta-Med Senior Health Research and Medical Associate" Dr. Scott Siskind, a.k.a. "Scott S. Alexander" (he's quoting people quoting people quoting people so I might have the attribution wrong). Meta-Med, you may recall, is a plutocratic/"personalized" eugenicist/"enhancement" medical/"kindasorta" outfit with lots of robocultic muckety mucks behind it, and I drew some attention to it and to the transhumanists/singularitarians connected to it in the post to which the Moot I'm upgrading here is appended. Anyway, it turns out that in the recent best-selling and very widely reviewed book Superintelligence by transhumanist singularitarian Robot Cultist Nick Bostrom, "Eliezer [Yudkowsky] gets cited just about every other page, and in MIRI HQ there is a two-way videoscreen link from them to Nick Bostrom's office in Oxford because they coordinate so much." This commenter then goes on to say that "Searching the book's bibliography [there are many] citations of MIRI people." MIRI is an acronym for the Machine Intelligence Research Institute, which used to be the Singularity Institute for Artificial Intelligence (long associated with Yudkowsky fanboys) but changed its name as the burgeoning recent corporate-military mainstreaming of Singularity ideology via Singularity University (so-called), the Singularity Summit, and various Google projects to code a Robot God and end death has started to attract more serious attention and cash. The commenter does note an interesting traffic between MIRI folks and people being hired by Google in these projects, and so the name change may be as much or more a matter of creating confusions about uncomfortable associations as of eliminating confusions about inaccurate ones, whatever the official story says. Be that as it may, MIRI in its present incarnation overlaps with Meta-Med in many of its funders and advisors, as full-on fulminating techno-transcendentalism still remains confined to a fairly cramped sub(cult)ure. Specific names that come up in the comment from the Moot about robocultic citations in Bostrom's bibliography are "Stuart Armstrong, Kaj Sotala, Paul Christiano, Wei Dai, Peter de Blanc, Nick Hay, Jeff Kaufman, Roko Mijic, Luke Muehlhauser, Carl Shulman, Michael Vassar, and nine different Eliezer publications." (Spelunk the Moot, by the way, to find half of these folks cutting their teeth the long decade through, some of them saying the most hilarious things imaginable.) To all this I added the following ruminations:
I seem to recall that Yudkowsky first claimed he didn't need to get a degree in any of the fields on which he still illiterately pontificates because the singularity was supposedly so near it would be a waste of time. Of course nowadays Robot Cultists like Bostrom who managed at least to do the work to get into the academy are so busy enabling Yudkowsky as a member of the fellow-faithful, getting him publications and citations and speaking gigs he could scarcely manage for himself with his online catalog of facile pseudo-philosophical fanfic about Harry Potter and Flowers for Algernon, that it remains a waste of time for Yudkowsky to set aside his guru-gig and actually see if his marginal convictions would long survive intact were he to go through the long slog of engaging with people who actually know what they're talking about. I actually do not doubt Yudkowsky is intelligent enough to benefit from an actual degree program and long-term engagement with the demands of real research. The path he is on does damage not only to the world -- deranging the terms of public deliberation and providing rationales for elite-incumbent interests on urgent technoscience questions -- but also to himself, as far as I'm concerned. A caveat though: philosophy departments will obviously let anybody through (hell, even me!), and Yudkowsky would not be helped in the least by a degree in philosophy which he then treated as an endorsement of his skills in computer science or his knowledge of physics. It is yet another sign of the extreme and at this point multi-generational decline into crisis of Anglo-American analytic philosophy that it can no longer insulate itself from futurology being done in its name and under its auspices.

12 comments:

jimf said...

> MIRI in its present incarnation overlaps with Meta-Med in many of its
> funders and advisors, as full-on fulminating techno-transcendentalism still
> remains confined to a fairly cramped sub(cult)ure. Specific names that
> come up in the comment from the Moot about robocultic citations in
> Bostrom's bibliography are "Stuart Armstrong, Kaj Sotala, Paul Christiano,
> Wei Dai, Peter de Blanc, Nick Hay, Jeff Kaufman, Roko Mijic, Luke Muehlhauser,
> Carl Shulman, Michael Vassar, and nine different Eliezer publications."

Daily dumb. . . who, me?

http://www.scaruffi.com/singular/sin25.html
----------------
[Excerpts from] Demystifying Machine Intelligence:
Why the Singularity is not Coming any Time Soon And Other Meditations
on the Post-Human Condition and the Future of Intelligence
by Piero Scaruffi

The Audience of the Singularity

I organize many events in the San Francisco Bay Area. I am always
frustrated that so few young people show up. I routinely attend technical
and scientific talks at prestigious organizations like Stanford University,
the Computer History Museum and Xerox PARC. They are free and frequently
feature top-notch speakers. At least half of the audience is consistently
made of grey-haired people. . . [I]t is mostly older inactive engineers
who hear the distinguished researchers talk about the state of high technology. . .
The reason why younger people don't come in proportional numbers to educational
events is simple: they are busy at work or studying. . . [or they're]
fed up after so many years of college and just want to party in the evening.
Younger people are therefore more likely to get their technology news from
attending yearly conferences and trade shows and (on a daily basis) from
reading popular bloggers. What they get is, in other words, press releases.
(Don't even try to convince me that your favorite tech blogger is competent
and reliable: he is just a flywheel in a highly efficient system to distribute
press releases by high-tech companies, and mostly product announcements,
with little or no knowledge of the science behind technology and little or
no contacts in the labs that produced that science before some startup turned it
into a popular gadget). Therefore young technology buffs are more likely to welcome
enthusiastically the news that some startup has introduced a new. . .
[gadget when] the startup that made that announcement is simply
looking for funds from venture capitalists and needs to create buzz
around its business plan. . . They are also the ones who tend to believe
that Artificial Intelligence has built incredibly smart machines and that
the Singularity is coming soon.

That is half of the audience that absorbs enthusiastically any news about machine
intelligence. The other half is the one that I compared to religiously devout
people who simply have an optimistic view of all these press releases. . .
====

Help me, io9, you're my only hope!

jimf said...

> I seem to recall that Yudkowsky first claimed he didn't need
> to get a degree in any of the fields on which he still illiterately
> pontificates because the singularity was supposedly so near it
> would be a waste of time.

http://ieet.org/index.php/IEET/more/scaruffi20141012
-------------------
[A]n age that is rapidly losing faith in the traditional God
desperately needs to find and found a new religion, and the
Singularity is the best option that some people have in the
21st century. The human mind is programmed to believe in the
supernatural. That is one of the limitations of the human mind
and all this talk about the Singularity is nothing but a
new modern proof of that limitation.
====

http://www.scaruffi.com/singular/sin49.html
-------------------
A Conclusion: Sociology Again

Humans have been expecting a supernatural event of some
kind or another since prehistory. Millions of people are
still convinced that Jesus will be coming back soon, and millions
believe that the Mahdi will too. The Singularity risks becoming
the new religion for the largely atheistic crowd of the
high-tech world. Just like with Christianity and Islam,
the eschatological issue/mission then becomes how to save
oneself from damnation when the Singularity comes, balanced
by the faith in some kind of resurrection. We've seen this
movie before, haven't we?
====

Several movies, in fact. ;->

http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html
-------------------
Stephen Hawking: 'Transcendence looks at the implications of
artificial intelligence - but are we taking AI seriously enough?'

With the Hollywood blockbuster Transcendence playing
in cinemas, with Johnny Depp and Morgan Freeman showcasing
clashing visions for the future of humanity, it's tempting
to dismiss the notion of highly intelligent machines
as mere science fiction. But this would be a mistake,
and potentially our worst mistake in history.
====

jimf said...

> > We've seen this movie before, haven't we?
>
> Several movies, in fact. ;->

And stay tuned for the next one(s):

http://en.wikipedia.org/wiki/Terminator:_Genisys
et seq.

Fasten your seat belts!

http://www.businessinsider.com/louis-del-monte-interview-on-the-singularity-2014-7
----------------
"Today there's no legislation regarding how much intelligence a
machine can have, how interconnected it can be. If that continues,
look at the exponential trend. We will reach the singularity in
the timeframe most experts predict. From that point on you're
going to see that the top species will no longer be humans,
but machines."

These are the words of Louis Del Monte, physicist, entrepreneur,
and author of "The Artificial Intelligence Revolution." Del Monte
spoke to us over the phone about his thoughts surrounding artificial
intelligence and the singularity, an indeterminate point in the
future when machine intelligence will outmatch not only your own
intelligence, but the world's combined human intelligence too.

The average estimate for when this will happen is 2040, though
Del Monte says it might be as late as 2045. Either way, it's a
timeframe of within three decades.
====

Dale Carrico said...

> look at the exponential trend

That's always an invitation to crazytown.
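
Why crazytown? Because a saturating process looks just like an exponential one right up until it doesn't, so a naive exponential fit to early data overshoots without bound once real-world limits bite. Here is a minimal sketch in Python -- the logistic curve and its parameters are illustrative assumptions of mine, not anything from Del Monte or the article quoted above:

import math

# A logistic (saturating) process looks exponential early on.
def logistic(t, ceiling=100.0, rate=0.5, midpoint=20.0):
    # S-curve: grows near-exponentially at first, then flattens at `ceiling`.
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Fit a pure exponential y = a * exp(b*t) to the earliest observations only.
t0, y0 = 0, logistic(0)
t1, y1 = 5, logistic(5)
b = (math.log(y1) - math.log(y0)) / (t1 - t0)
a = y0 / math.exp(b * t0)

for t in (5, 20, 40):
    naive = a * math.exp(b * t)   # "look at the exponential trend"
    actual = logistic(t)          # what a saturating process actually does
    print(f"t={t:2d}  extrapolated={naive:14.1f}  actual={actual:8.1f}")

# The fit tracks reality at t=5, is already double reality at the
# midpoint t=20, and by t=40 predicts tens of thousands of times
# the ceiling of 100.

Nothing depends on the particular numbers: any resource-limited growth curve tells the same story, which is why "the trend line says 2040" is an argument about a fit, not about the world.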

Dale Carrico said...

I said in this post, "I seem to recall that Yudkowsky first claimed he didn't need to get a degree in any of the fields on which he still illiterately pontificates because the singularity was supposedly so near it would be a waste of time." As someone who has been reading and critiquing and sparring with transhumanists and singularitarians for nearly the whole online lifespan of these movements, I meant what I said in making that observation quite literally. That is, I personally do recall this reasoning being made and justified by Yudkowsky quite early on, long before most people took notice of him at all. Does anybody have a citation trail that would nudge that observation out of its present anecdotal recollection status? I ask because the observation has been called out as a falsehood elsewhere. If I am misremembering this, I am happy to be corrected and will say so. But like it or not, I must say I don't think I am misremembering this at all.

jimf said...

Something like that rationale (not the imminence of the Singularity per se,
but that of "the danger of unfriendly AI. . . [being] so near --
as early as tomorrow") was attributed to Yudkowsky by Declan McCullagh
in that infamous 2001 _Wired_ article.

http://archive.wired.com/science/discoveries/news/2001/04/43080
----------
Making HAL Your Pal
Declan McCullagh
04.19.01

. . .

"I've devoted my life to this," says Yudkowsky, a self-proclaimed "genius"
who lives in Atlanta and opted out of attending high school and college.

It's not for lack of smarts. He's a skilled, if verbose, writer and an
avid science-fiction reader who reports he scored a perfect 1600 on his SATs.

Yudkowsky's reason for shunning formal education is that he believes
the danger of unfriendly AI to be so near -- as early as tomorrow --
that there was no time for a traditional adolescence. "If you take the
Singularity seriously, you tend to live out your life on a shorter
time scale," he said.
====

Dale Carrico said...

Thanks, Jim -- I can always count on you!

ZARZUELAZEN said...

The thing about Yudkowsky, he claims to be a 'rationalist', but in fact he's a religious fundamentalist, no different at all from any other run-of-the-mill religious fanatics.

The fundamentalist sees the world in absolute black-and-white terms, based on elaborate intellectual castles-in-the-air (usually having little or no relation to practical reality), and no other perspectives can be tolerated. To the fundamentalist, anyone disagreeing with them is 'mad' or 'stupid' and must be converted or destroyed at all costs.

Yudkowsky has spent the past 14 years on the internet bombarding gullible nerds with techno-babble in order to get them to send him money.

Consider the really weird, extremist ideologies he's pushed over the years (Libertarianism, Utilitarianism, Bayesianism).

Black-and-white cartoon moral theories (Utilitarianism), black-and-white cartoon economic ideas (Libertarianism), black-and-white cartoon philosophies of science (Bayesianism) -- see the pattern here? It's clear Yudkowsky is a closet religious fundamentalist.

As to the idea that 'super-intelligence' is some sort of magic god, well, this is just silliness. There's lots of things super-intelligence couldn't do -- it can't break the laws of physics, for example, and it can't crack unbreakable encryption codes -- it can't overturn logic and mathematics. Quite apart from these clear abstract limits, it would also be limited by many real-world practical constraints (resources, time, human nature, politics, etc.) just like you or me.
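
For what it's worth, the "unbreakable encryption" point above is not hand-waving: a one-time pad is information-theoretically secure, so no amount of intelligence recovers the message without the key. A minimal sketch in Python -- the messages are illustrative placeholders, not from anything quoted in this thread:

import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # XOR two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

# The sender encrypts with a uniformly random pad, used once, never reused.
message = b"attack at dawn"
pad = secrets.token_bytes(len(message))
ciphertext = xor_bytes(message, pad)

# An eavesdropper, however intelligent, learns nothing from the ciphertext
# alone: for ANY candidate plaintext of the same length there exists a pad
# that yields exactly the intercepted bytes.
candidate = b"retreat at six"                 # same length as `message`
explaining_pad = xor_bytes(ciphertext, candidate)
assert xor_bytes(candidate, explaining_pad) == ciphertext

# Every equal-length plaintext is thus equally consistent with the intercept
# (Shannon's perfect secrecy); added intelligence cannot extract information
# the ciphertext simply does not contain.

The point generalizes: where the data does not contain the answer, no reasoner, super or otherwise, can infer it.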

And of course, Dale, you're right that the very term 'super-intelligence' has not even been properly defined, let alone has any actual real-world advance toward it appeared on the near horizon.

What would a real super-intelligence be like? I'm pretty sure that blindfolded monkeys throwing darts at a board with random wild speculations written on it would be just as accurate as the techno-babble that's spewed endlessly from Yudkowsky's mouth over the last 14 years.

jimf said...

> [H]e claims to be a 'rationalist', but in fact he's a religious fundamentalist. . .
>
> [He] has spent the past 14 years on the internet bombarding gullible
> nerds with techno-babble in order to get them to send him money. . .
>
> Black-and-white cartoon moral theories (Utilitarianism),
> black-and-white cartoon economic ideas (Libertarianism),
> black-and-white cartoon philosophies of science (Bayesianism). . .

http://www.motherjones.com/politics/2014/08/inquiring-minds-arie-kruglanski-psychology-extremism-isis
----------------------
According to University of Maryland psychologist. . . Arie Kruglanski,
who has studied scores of militant extremists. . . [the
attraction of an extreme ideology is n]ot just its content,
but the mindset that it indicates -- one that sees the
world in sharp definition, no shades of gray.
"These extreme ideologies have a twofold type of appeal," explains
Kruglanski on the latest Inquiring Minds podcast. "First of all,
they are very coherent, black and white, right or wrong.
Secondly, they afford the possibility of becoming very unique,
and part of a larger whole." . . .

That kind of belief system, explains Kruglanski, is highly attractive
to young people who lack a clear sense of self-identity, and are
craving a sense of larger significance. . .

These young people seem to have what psychologists call a very
strong "need for cognitive closure," a disposition that leads to
an overwhelming desire for certainty, order, and structure in one's
life to relieve the sensation of gnawing -- often existential --
doubt and uncertainty. According to Kruglanski, this need is
something everyone can experience from time to time. We all
sometimes get stressed out by uncertainty, and want answers.
We all feel that way in moments, in particular situations, but
what Kruglanski shows is that some of us feel that way more
strongly, or maybe even **all the time**. And if you go through
the world needing closure, it predisposes you to seek out the
ideologies and belief systems that most provide it.

Fundamentalist religions are among the leading candidates. Followers. . .
"know exactly what is right and what is wrong, how to behave in
every situation," explains Kruglanski. "It's very normative and constraining,
and a person who is a bit uncertain, has the need for closure, would be
very attracted to an ideology of that kind." And for an outsider. . .
drawn to that sense of certainty that it imparts, Kruglanski adds,
you then want to prove yourself. To show your total devotion and
commitment to the cause. . .
====

jimf said...

> [He] has spent the past 14 years on the internet bombarding gullible
> nerds with techno-babble. . .

Like a guy I used to read on an Enneagram (of Personality)
forum more than a decade ago. (He was fun to read back then, but he's also
an Ayn Rand fan, and there was that unfortunate episode with his
roommate's cat.)

https://plus.google.com/+TimCaswell/posts/QJcpJei9Muy#+TimCaswell/posts/QJcpJei9Muy
------------------------
Isaac Schlueter

Aug 9, 2012

There are many great materials on decision theory, cognitive distortions,
irrationality, meditation, motivation, and influence. I think anyone
with a brain ought to study the literature on the subject, for the
same reason that I think drivers ought to get a license first. This
series from Eliezer Yudkowsky is about the best introduction/overview
to the art and science of self-updating that I've ever seen, and
the bibliographies on each article often have very useful links:
http://wiki.lesswrong.com/wiki/How_To_Actually_Change_Your_Mind

It is no small task. It is easier to climb a mountain, or get a
black belt in a martial art, or learn to fly a plane, or write an
operating system, than it is to change even a single idea in your head.
It takes practice, work, dedication, constant vigilance, and unshakable
persistence in the face of repeated failure.

Honest rationalism requires a tremendous faith, though certainly not
in any sort of "higher power". It requires a faith in yourself and your
ability to keep going. This belief in oneself certainly must come in
advance of evidence, because the evidence is clear: we are all weak,
mostly wrong most of the time, and sudden enlightenment is a fairy tale.
But do we stop writing software just because there will be bugs?
Of course not. It is only through incremental self-updating that we
can improve, and that can only be done if you believe you can do it.
It's much easier to just give up, and unlike climbing a mountain,
you won't even notice that you have given up.

You can never stop exercising, or the muscles turn to flab, and you become
just another insane person driving their brain without a license.
I don't think it's possible to keep to The Way unless you have a goal
other than the practice itself; the "something to protect" that
gives the superhero a reason to focus their power. It requires a
connection to the world and the people in it; the courage to face
overwhelming odds; the humility to admit failure and swallow pride
and always rise above our petty grievances and insecurities; the
audacity to say, "I am fit to live in this world. I will stare reality
in the face, and never flinch away from it." The rewards are significant.

This is why I laugh when a religious person tells me that I lack faith
or spirituality. They usually don't know the meaning of those words.

Tsuyoku naritai! [Japanese: "I want to become stronger!"]
====

Enchanté to you too, bub.

jimf said...

And I say again unto you. . .

> https://plus.google.com/+TimCaswell/posts/QJcpJei9Muy#+TimCaswell/posts/QJcpJei9Muy
> ------------------------
> Isaac Schlueter
>
> . . .
>
> It is easier to climb a mountain, or get a black belt
> in a martial art, or learn to fly a plane, or write an
> operating system, than it is to change even a single idea
> in your head. . .

Similarly, "It is easier for a camel to go through the eye
of a needle, than for a rich man to enter into the kingdom of God. . ."

-- Mark 10:25

and also

"It is easier for a camel to pass through the eye of a needle if it is
lightly greased."

-- Kehlog Albran, "The Profit"

;->

jimf said...

> "It is easier for a camel to pass through the eye of a needle if it is
> lightly greased."
>
> -- Kehlog Albran, "The Profit"

Lightly greasy:

http://elfs.livejournal.com/1197817.html
------------------
Elf M. Sternberg
January 22nd, 2010

[W]e, human beings, have **purpose** of some kind. We fight like
hell to fulfill it, whatever it is, and we're good at the
consequential purpose of reproducing to cover the planet
like mad. But that purpose is arbitrary, emergent because
that's the way evolution works. **All** our purposes are arbitrary
and emergent: barring a theological excuse, we're making
it up as we go along, picking and choosing the ones that
appeal to us. . .

In order for there to be a universe in which we limited,
organic human beings have a place beside our superhuman progeny,
the superhuman progeny must actively want (that's the emotion
they need, to decide for our survival) us to be around. We
must be, quite literally, a subject of their, for lack of
a better term, a posthuman term, religion. . .

AIs won't emerge through the standard evolutionary model and
will not have the exaptive outcomes of evolutionary ecology.
They will emerge due to our desires. We will oversee the process.
We have a chance to get it right.

There is, however, only exactly one chance. Between rampancy and
failure, we must pass through the eye of the needle and create AIs
that **like** and **want** us, no rationale needed, and if questions
are asked, the AIs must be satisfied, as we are satisfied, that
in an arbitrary and uncaring universe, they want to keep surviving. . .
and they want us to keep surviving right along with them.
[Eliezer] Yudkowsky is working harder and smarter on giving humanity
that chance than any other thinker on the issue of AI sentience.
He should be given his due.
====

;->