Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Sunday, October 05, 2014

Very Serious Robocalyptics

(Spoiler warning for tl;dr space cadets: read through to the penultimate paragraph to arrive at some serious anti-futurological muckraking.) A piece in Salon today, Our Science-Fiction Apocalypse: Meet the Scientists Trying to Predict the End of the World, will no doubt introduce the transhumanist futurist Nick Bostrom to many Americans who have not heard of him before. Although the piece is larded with qualifications and mild snark (the "science fiction" in the title should give some pause to the "meet the scientists" in the subtitle, for example), it is far from a critical exposé and, if anything, is offered up in the spirit of a contrarian think-piece providing futurological food for thought -- and hence gives Bostrom and his fellow faith-based techno-transcendentalists far more benefit of the doubt than they deserve or than we can safely stand for.

Nick Bostrom is described in the piece as a scholar concerned with "existential risks" and as the head of Oxford's Future of Humanity Institute. The reference to Oxford confers an immediate gloss of legitimacy on Bostrom's brand of futurism (entirely as it was meant to do), and it is important to note that Bostrom is not also described in the piece as a transhumanist, as a co-founder of the World Transhumanist Association, and as the writer of the FAQ that continues to define that techno-transcendental "movement" for many of its members. That the World Transhumanist Association subsequently repackaged itself as a slick pop-tech site called Humanity-plus (non-member humans by comparison being of course merely humanity-minus), and that Bostrom went on to co-found the more reputably monikered stealth-transhumanist Institute for Ethics and Emerging Technologies provide what seems to me indispensable background cautioning against any out-of-hand acceptance of the scholarly legitimacy of Oxford's Future of Humanity Institute as well, especially once one discovers how many familiar faces from Bostrom's more robocultic earlier outings throng the ranks of his august corporate-sponsored, Oxford-inflected effort.

A couple of years ago I wrote a piece entitled Insecurity Theater: How Futurological Existential-Risk Discourse Deranges Serious Technodevelopmental Deliberation, in which I proposed that the sort of existential-risk analysis Bostrom is promoting in the Salon piece represents "the other side of the counterfeit coin of expertise provided by [the] hyperbolic promotional/self-promotional pseudo-discipline" of futurology. I usually critique superlative futurism/futurology of the kind associated with transhumanism, techno-immortalism, singularitarianism, digital-utopianism, nano-cornucopism, and so on as essentially faith-based initiatives, hyperbolizing consensus science and legible policy concerns into promises of techno-transcendence modeled on the omni-predicates of theological godhood: omniscience, omnipotence, omnibenevolence.

Existential risk discourse is the apocalyptic obverse to these futurological raptures, at once diverting attention from threatening criticisms of the belief system itself while also providing still more diversions from real science, actual harm-reduction policy-making, substantial problem-solving, serious deliberation over actually-existing dangers, risks, costs, inequities, crimes. In that earlier piece I warned: Any second an actually accountable health and safety administrator is distracted from actually existing problems by futurological hyperbole is a second stolen from the public good and the public health. Any public awareness of shared concerns or public understanding of preparedness for actually existing risks and mutual aid skewed and deranged by futurological fancies is a lost chance of survival and help for real people in the real world. In a world where indispensable public services are forced to function on shoestring budgets after a generation of market fundamentalist downsizing and looting and neglect, it seems to me that there are no extra seconds or extra dollars to waste on the fancies of Very Serious Futurologists in suits playing at being policy wonks. To put the point more concisely, existential-risk discourse seems to me an existential risk.

Am I over-reacting? Am I indulging yet again in "hate-speech" against the innocent futurist subcultures? Am I engaging -- as the robocultically-inclined charge again and again -- in name-calling without any substance to back my pessimistic and relativistic and nihilistic and "deathist" assertions? Am I being unfair to serious futurologists with important concerns about the public good? Judge for yourself. From the Salon piece itself:
“Global warming is very unlikely to produce an existential catastrophe,” Nick Bostrom, head of the Future of Humanity Institute at Oxford, told me when I met him in Boston last month. “The Stern report says it could hack away 20 percent of global GDP. That’s, like, how rich the world was 10 or 15 years ago. It would be lost in the noise if you look in the long term.” ... But, Bostrom believes, even the misery caused by this kind of decline pales in comparison to what could be inflicted by high-tech nightmares: bioengineered pandemic, nanotechnology gone haywire, even super-intelligent AI run amok.
While the author of the piece may seem to want to insert some sanity at this point by declaring (the obvious) that "[t]hese [are] exotic and unlikely-sounding disasters," he immediately confounds that expectation by completing the sentence with the observation that "these [are] possibilities that are finally getting some attention." Whew! Finally! Won't somebody please think of the Robocalypse! Enough of this shilly-shallying about silly atmospheric carbon and human trafficking and arms proliferation 'n stuff!

The piece goes on to describe a burgeoning institutionalization of this sort of discourse with serious corporate-military dollars behind it and serious academic heft lending it prestige (it is not irrelevant to the latter that the neoliberal corporatization of the academy renders once-prestigious places of scholarship much more ready to eat their legacies for cash on the barrelhead):
In the last few years, a number of institutes have sprung up to begin to do serious research on the risks of emerging technology, some of them attached to the world’s most prestigious universities and stocked with famous experts. In addition to FHI, there is the Center for the Study of Existential Risk at Cambridge and the Future of Life Institute at MIT, along with the Lifeboat Foundation, the Foresight Institute and several others. After years of neglect, the first serious efforts to prevent techno-apocalypse may be underway.
It is unfortunate that the Salon piece provided no links to the organizations listed here as undertaking these serious efforts. Take, for example, The Lifeboat Foundation, which is soliciting participants in, among other things, its "LifeShield Bunkers program [which] is a compliment to our Space Habitats program. It is a fallback position in case programs such as our BioShield and NanoShield fail globally or locally. A bunker can be quite large, such as Biosphere 2. A large bunker would be a place where babies are born and children play and go to school... Let us know if you wish to participate in a local LifeShield Bunker. We will contact you if and when we find a cluster of interested people in your area. Read The Case for Survival Colonies: Soliciting Colonists." Very serious! Or The Foresight Institute, devoted to Eric Drexler's promises of a super-abundance and near-immortalization "technology... based upon putting atoms where we want them rather than upon handling 'atoms in unruly herds[.]'" Again, very serious! Those who read my blog with any regularity will find many familiar faces recurring in the advisory boards and recommended readings of these organizations (both the ones that are striving for mainstream respectability and the ones that are letting their futurological freak flags fly -- the talent pool for this brand of moonshine isn't exactly capacious): just scroll down the names of the futurologists who have come in for analysis, and no small amount of ridicule, in this anti-futurological archive of mine, and you will find most of them there.

All that said, I will concede Salon's point, to a point. This business is indeed serious -- serious as a heart attack. The ramification of these institutional spaces for futurological flim-flammery is taking place across a terrain in which think-tanks have already displaced or deranged the role of the academy as a source of rigorous scholarly support and critique of public policy deliberation (and even those with absolutely valid critiques of the academy -- stratified as it is by sexism, white-racism, and plutocratic upward-fail -- can grant the force of this point). Indeed, rather than view the latest arrival of futurology to the pseudo-scholarly commandeering of and feasting on public deliberation as a particularly new phenomenon, I would insist instead on the role of a futurological/instrumental/computational logic especially congenial to the ends of corporate-military "competitiveness" -- a logic originating in no small part out of the host of speculative pseudo-disciplines connected to the inequitable distributions of costs, risks, and benefits of market futures -- in the original and abiding corporate-military think-tankification of the public sphere in the first place. To these developments should be added the recent entrance of faith-based techno-transcendental aspirations like coding Robot-Gods or "solving death" into the budgets of (what now passes for) elite technology companies, like Google and Apple. It is no surprise that the celebrity CEOs of venture capitalism whose skim and scam operations have thrust some of them into lotto-luxe multi-billionaire precincts they rationalize through assertions of sooper-genius would find themselves attracted to infantile fantasies and facile pseudo-scientific formulations of faith-based futurisms -- but at some point throwing real money and directing real media spotlights at this nonsense threatens to have an effect in the real world.

The "Sixth" of my Ten Reasons to Take Robot Cultists Seriously goes right to the heart of this concern:
As Margaret Mead famously insisted, "Never underestimate the power of a small group of committed people to change the world." The example of the neoliberals of the Mont Pelerin Society reminds us that a small band of ideologues committed to discredited notions that happen to benefit and compliment the rich can sweep the world to the brink of ruin, and the example of the neoconservatives reminds us that a small band of committed people can prevail even when they are peddling not only discredited but frankly ridiculous and ugly notions. Futurologists pretend that hyperbolic marketing projections are the same thing as serious technoscience policy deliberation, which is a gesture enormously familiar to the investor class and the technology sector's customary membership, and the futurologists inevitably cast rich entrepreneurs as the protagonists of history, which is a gesture enormously attractive to the skimmers and scammers and celebrity CEOs of the technology sector's essentially narcissistic culture. Although their various predictions are rarely more accurate than those of chimpanzees at typewriters, although their various transcendental glossy-mag editorials and tee-vee ready techno-rapture narratives are rarely more scientific in their actual substance than those of evangelical preachers, although their dog and pony show sounds almost exactly the same now as it did five years ago, ten years ago, fifteen years ago, twenty years ago, twenty-five years ago, as they still drag out the same old tired litany (super-parental robot gods! genetic fountains of youth! cheap nanobotic superabundance! better than real immersive VR treasure caves! soul-uploading into shiny robot bodies!), and all with the same fervent True Belief, the same breathless insistence that this is all New! the same static repetition that change is accelerating up! up! up! -- it is not really surprising to discover that the various organizations associated with superlative futurology are attracting more and more money and support and attention from the rich narcissistic CEOs of the technology sector whose language they have been speaking and whose egos they have been stroking so assiduously for years and for whom they provide such convenient rationalizations for elite-incumbent rule. You better believe that, ridiculous and crazy though they may be, the Robot Cultists with well-funded organizations (like the Future of Humanity Institute at Oxford, Global Business Network, Long Now Foundation, Institute for Ethics and Emerging Technologies, Singularity Summit, to name a few) to disseminate their pet wish-fulfillment fantasies and authoritarian rationalizations can do incredible damage in the real world.
It is easy to find yourself smugly shaking your head at the stark realities implied by the concession in the Salon piece that, "Some of these institutes have not even started to do research yet; they’re still raising funds." You don't say! Why, that's like declaring "some evangelicals haven't researched their fire and brimstone claims; they're too busy passing the collection plate"! But the funding these organizations are starting to attract from corporate sponsors and the derangement of the terms of public policy discourse introduced by the multiplication of techno-transcendental figures, frames, conceits, narratives are all too real whatever their palpable idiocy -- just look how eager people are to describe unintelligent artifacts as "smart" to the denigration of their own intelligence; just look how eager people are to describe as "disruptive" completely conventional right-wing deregulatory schemes; just look how regularly the public will describe the stasis of our unsustainable conformist consumer socioculture as a period of "accelerating growth" -- in each case a futurological reframing in the service of elite-incumbent interests and to the utter detriment of sense.

New to me from reading the piece, and I must say enormously interesting, was its observation that "[t]he field [of futurological existential risk discourse] has benefited from a well-informed patron, the Estonian entrepreneur and computer programmer Jaan Tallinn, co-founder of Skype... [who] told me that his concern with the future of humanity began in 2009, when a lawsuit between Skype and eBay left him temporarily sidelined. Finding himself with millions of dollars in the bank and no obligations, he spent his time reading through the web site Less Wrong, 'a community blog devoted to refining the art of human rationality.'" It is very interesting that any of this is supposed to qualify Mr. Tallinn as "well-informed" in some way. Does the article tell us that Tallinn spent this time earning a degree in some legible scientific field in a university or doing research in a laboratory setting or consulting with policymakers beholden to majorities or even coding usable software or building prototypes of workable devices? No, no, no, no, no. Instead we hear of a techbro sitting on a pile of lucky-lucre with time on his hands reading some guru wannabe's internet manifesto about the coming of the Robot God and thinking this puts the Keys of History in his hands. Very serious, well-informed. About Tallinn's singularitarian guru Eliezer Yudkowsky and his Less Wrong coterie you may find it enlightening to read A Robot God Apostle's Creed for the Less Wrong Set, or Deep Thoughts on Democracy from Eliezer Yudkowsky, or So Not A Cult. I did a little more digging and discovered soon enough that when Jaan Tallinn isn't giving his money to futurologists who would worry us about Robocalypse he is devoted to the work of "a medical-consulting firm" called MetaMed which he founded and which received an infusion of start-up cash from none other than market-libertopian and singularitarian Robot Cultist Peter Thiel.

I will leave it as an exercise for the reader to think through, by way of conclusion, the political implications of championing "personalized medicine" for the super-rich when universal single-payer basic healthcare hasn't arrived after more than a century of heartbreaking mass social struggle backed by a consensus of healthcare expertise, or to think what eugenic transhumanists may have in mind when they speak of providing client performance enhancement. No doubt the countless millions of people who die from treatable or neglected diseases because they cannot afford basic healthcare or because they live in over-exploited regions of the world without access to basic healthcare do not rise to the level of an "existential threat" that would attract the notice of our futurological faithful -- even if every single human being who dies for the lack of healthcare available to others is a human being who potentially could have contributed their measure of imagination and intelligence and effort to the solution of shared problems that really do imperil humanity as well as to the archive of creative expressivity that makes life worth living for us all.

18 comments:

Chad Lott said...

I wish some of these folks with money would read an online manifesto about how rad digging wells is for saving lives or getting bicycles to little girls in Africa is for preventing rape.

Dale Carrico said...

A New New Deal paying the unemployed to plant trees and subsidizing the addition of front porches to houses, you know? Not rocket science.

jimf said...

> Nick Bostrom is described in the piece as a scholar concerned
> with "existential risks" and as the head of Oxford's Future of
> Humanity Institute. . .
>
> Any second an actually accountable health and safety administrator
> is distracted from actually existing problems by futurological
> hyperbole is a second stolen from the public good and the public health.

You know, these are the guys who think it's likely that we're all
**already** living in a computer simulation -- that we're
all just virtual-reality software constructs floating around
in some Godlike civilization's Blue Genes (or in the
pockets of their blue jeans -- maybe we're just a wearable
novelty item -- carry around your own pocket universe!).
( http://www.simulation-argument.com/ ). You know, because
it's **obvious** (ain't it?) that **we're** on a trajectory
to have Jupiter-sized masses of "computronium" in a few decades (because
More Moore and acceleration of acceleration, dontcha know),
so if we can do it then the statistical odds that
**somebody** somewhere else in the universe has already done
it or is doing it right now are overwhelming, right? And because
we've all seen Star Trek TNG "Ship in a Bottle", or
read Daniel F. Galouye's _Simulacron-3_ or Greg Egan's
_Permutation City_, or seen The Matrix (a latecomer, but the one
that seems to have introduced the idea to a mass audience)
or Inception, or -- well gee, does TV Tropes have
an entry on this one? (**Of course** they do!
http://tvtropes.org/pmwiki/pmwiki.php/Main/RecursiveReality ).
Even "skeptic" Sam Harris seems to take this SF trope seriously --
or at least seriously enough (apparently he's a pal of Bostrom's)
to bring it up in a public debate about the afterlife in the presence
of none other than Christopher Hitchens (who managed to keep a
straight face throughout):
Christopher Hitchens and Sam Harris on is there an afterlife full debate
http://www.youtube.com/watch?v=mlCjy52h0hc
(18:45/1:37:50).

So what does this have to do with "existential risk"?
Well, think about it. If we and our surroundings are just
computer programs, then we'd all just disappear if the computer were
TURNED OFF! (or if the pocket universe went through the laundry by accident.)
So we'd better figure out what we can do to make sure we're
not TURNED OFF! (Be more entertaining, be more morally worthy,
or God knows what.) This line of thought is calculated to give
nightmares to the sorts of people who got the heebie-jeebies about
Roko's Basilisk ( http://kruel.co/#sthash.rCUjgbHI.dpbs ).
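(For reference, and only as I understand the arithmetic on that site, the
whole thing turns on a single fraction. Call f_p the fraction of
civilizations that ever reach the "posthuman" computronium stage, and N
the average number of ancestor-simulations each such civilization runs.
The claimed fraction of human-type observers who are simulated then comes
out to roughly

    f_sim = (f_p * N) / (f_p * N + 1)

which is "overwhelming" only if you have already granted overwhelming
values for f_p and N, i.e. only if you have already swallowed the
More-Moore premises whole.)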

jimf said...

> It is easy to find yourself smugly shaking your head at the
> stark realities implied by the concession in the Salon piece that,
> "Some of these institutes have not even started to do research yet;
> they’re still raising funds." You don't say!

There seem to be two lines of response (coming from self-identified
"transhumanists" themselves) to the threat of "existential
risks" due to acceleration of accelerating "technologies" (grey goo
nano-disassemblers or machine superintelligence that will -- what?
Make the traffic lights and ATMs malfunction.).

One -- the "left wing" response that seems implicit in some of
the rhetoric that comes out of places like IEET (dare I mention
James Hughes?) -- is that government agencies and ethics "experts"
need to start drafting laws and regulations **right now** to
be ready for dangerous ultratech before it can get the jump
on us. (What are they doing about the self-parking cars, I
wonder. That's just the first step, dontcha know, on the
slippery slope to the Robopocalypse.) I suspect this line
(the Big Government approach ;-> ) is anathema to the majority
of self-identified transhumanists, but it is being pushed as
the "responsible" attitude by a high-visibility minority of them.
I think that the "responsibility" or desirability of attempting
to draft serious legislation to regulate science-fictional
technologies is questionable, to say the least.

The other line is that self-appointed "soopergeniuses" (as
Dale calls them) will show us the way, outside of established
regulatory mechanisms or even the customary peer-review vetting
process of the scientific community. Just send money, and we'll
take care of the rest!
( http://kruel.co/2013/01/04/should-you-trust-the-singularity-institute/#sthash.ZG4t8NXF.dpbs ).

A variant of the latter, in fictional form, seems to have
gotten a lot of public attention last year for journalist-turned-self-published
SF author Zoltan Istvan, who wrote something called _The Transhumanist
Wager_ that seems (I haven't read it) to be a transhumanist-themed
retread of Ayn Rand's _Atlas Shrugged_. All we need is $50 billion,
a soopergenius, and a suitable Galt's Gulch (or a seasteading enclave)
free from government interference, and a nucleus of transhumanists
will achieve the immortality they need to take over the world
and spearhead the creation of humanity's evolutionary successors.
Or something. It apparently generated a lot of >Hist heavy breathing.

http://www.marinij.com/marinnews/ci_24745534/transhumanist-novel-by-zoltan-istvan-sparks-intense-dialog
http://hplusmagazine.com/2014/05/12/the-transhumanist-wager-by-zoltan-istvan-review-by-ben-goertzel/

Skeptic Fence Show interview: Zoltan Istvan (Part 1 of 5)
http://www.youtube.com/watch?v=kAWDIHIAIcY

Zoltan Istvan: The Transhumanist Wager Is A Choice We'll All Have To Make
https://www.youtube.com/watch?v=EoxVeu67iD0

jimf said...

> To put the point more concisely, existential-risk discourse seems
> to me an existential risk. . .
>
> [A]t some point throwing real money and directing real media
> spotlights at this nonsense threatens to have an effect in the
> real world.

http://www.patheos.com/blogs/slacktivist/2003/10/17/left-behind-is-evil/
-------------------
The apocalyptic heresies rampant in American evangelicalism are more
popular than ever.

It's easy to dismiss these loopy ideas as a lunatic fringe, but that
would be a mistake. The widespread popularity of this End Times mania
has very real and very dangerous consequences, for America and for
the church. . .
====

Would you like that apocalypse with angels or AIs?

jimf said...

> The reference to Oxford confers an immediate gloss of legitimacy
> on Bostrom's brand of futurism (entirely as it was meant to do),
> and it is important to note that Bostrom is not also described
> in the piece as a transhumanist, as a co-founder of the
> World Transhumanist Association, and as the writer of the FAQ that
> continues to define that techno-transcendental "movement" for
> many of its members.

Read his new book, if you don't want to be Left Behind!
( http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/ )

http://www.townhallseattle.org/nick-bostrom-the-future-of-artificial-intelligence/
------------------
Within three decades, artificial intelligence might be the
dominant species on Earth. According to Oxford’s Nick Bostrom,
this will be the future — unless humans do something to stop it.
_Superintelligence: Paths, Dangers, and Strategies_ analyzes
the fundamental question of whether or not AI is a friend
or foe to human intelligence. With increased advances in
science and technology, Bostrom says it’s possible humans
will become dependent on this superintelligence down the road.
He’ll offer concrete strategies — such as creating a
“seed artificial intelligence” — for containing and combating
the precarious waters of humanity’s future. A professor
of philosophy at the University of Oxford, Bostrom is also
Director of the Programme on the Impacts of Future Technology.
====

"Seed AI" -- now where have I heard that before?
Oh, yeah:

http://wiki.lesswrong.com/wiki/Seed_AI
------------------
A Seed AI (a term coined by Eliezer Yudkowsky) is an
Artificial General Intelligence (AGI) which improves itself by
recursively rewriting its own source code without human intervention.
Initially this program would likely have a minimal intelligence,
but over the course of many iterations it would evolve to
human-equivalent or even trans-human reasoning. The key for
successful AI takeoff would lie in creating adequate starting
conditions.
====

"Adequate starting conditions" -- for an AI Institute (or an
AI Church) if not an AI.

jimf said...

Left Behind, Right Behind, whose behind?

http://turingchurch.com/2014/09/09/religion-as-protection-from-reckless-pursuit-of-superintelligence-and-other-risky-technologies/
-----------------
Religion as protection from reckless pursuit of superintelligence
and other risky technologies
September 9, 2014
Giulio Prisco

I think religions that provide hope in personal resurrection – either
traditional religions based on the “supernatural” or modern,
Cosmist religions based on science, might be our best protection from
reckless pursuit of superintelligence and other risky technologies. . .

Today many imaginative scientists and science-literate laypersons,
who could appreciate Nick’s arguments, believe that death is final.
They feel doomed to the irreversible non-existence of certain death,
unless the superintelligence explosion happens in their lifetime,
and therefore they want to push forward recklessly, as fast as
possible.

On the contrary, those who hope to be resurrected after death, by
either supernatural agencies or future science and technology, do
not feel the same urgency to accelerate at all costs (this is
my case). Therefore I think religion, or forms of scientific
spirituality that offer hope in personal resurrection and afterlife,
can help.
===

http://skefia.com/2014/08/28/thoughts-on-bostroms-superintelligence/
-----------------
Thoughts on Bostrom’s ‘Superintelligence’
August 28, 2014
Giulio Prisco

. . .

I have known Nick for many years, since way back when he was a
young transhumanist thinker with very wild ideas. Nick co-founded
the World Transhumanist Association (now Humanity+) and the
Institute for Ethics and Emerging Technologies (IEET), and I had
the honor of serving with him in both organizations. Now, Nick
plays in a higher league, as Director of the prestigious
Future of Humanity Institute at Oxford University.

Bostrom doesn’t identify as a transhumanist. “[I]t is very much not
the case that I agree with everything said by those who flock under
the transhumanist flag,” he says on his website. But he is persuaded
that strong machine intelligence and mind uploading can [be] developed,
probably in this century, and result in superintelligence.
He is noncommittal about the precise timeline, but my understanding
is that he thinks that the first human-equivalent new form of
intelligence may be developed sometime in the second half of
this century, with the possibility of a very fast transition
to superintelligence soon thereafter. The transition mechanism
was described by Vernor Vinge and, before him, I. J. Good (1965). . .

The book is dedicated to the control problem: how to keep future
superintelligences under control. . .

I tend to think that controlling a superintelligence may be impossible
in principle, for the same reason why beetles could not control
a person, even if they do their very best (that is, the best that
beetles can do). . .
====

jimf said...

> I did a little more digging and discovered soon enough that
> when Jaan Tallinn isn't giving his money to futurologists
> who would worry us about Robocalypse he is devoted to the
> work of "a medical-consulting firm" called MetaMed which he
> founded and which received an infusion of start-up cash from
> none other than market-libertopian and singularitarian
> Robot Cultist Peter Thiel.

http://www.overcomingbias.com/2013/03/rah-second-opinions.html
--------------
Some high status members of this rationalist community (Peter Thiel,
Jaan Tallin, Zvi Mowshowitz, Michael Vassar) have a new medical startup,
MetaMed, endorsed by other high status members (Eliezer Yudkowsky,
Michael Anissimov). (See also this coverage.) You tell MetaMed your troubles,
give them your data, and pay them $5000 or $200/hour for their time
(I can’t find any prices at the MetaMed site, but those are numbers
mentioned in other coverage). MetaMed will then do “personalized research,”
summarize the literature, and give you “actionable options.”
Presumably they somehow try to stop just short of the line of
recommending treatments, as only doctors are legally allowed to do that.
But I’d guess you’ll be able to read between the lines.
====

jimf said...

Is there a doctor in the house?

https://web.archive.org/web/20130302221710/http://www.metamed.com/our-scientists-doctors-researchers

"Founder" Michael Vassar
( http://en.wikipedia.org/wiki/Michael_Vassar )
used to run the Singularity Institute for Artificial Intelligence
(later called the Singularity Institute, still later the Machine Intelligence
Research Institute).

"Chief Executive Officer" Zvi Mowshowitz used to be best known
for his involvement with a fantasy-themed collectible
card game called "Magic: The Gathering"
(sounds like that vampire saga "Kindred: The Embraced", or for
that matter a diner that existed 20 years ago on Route 3
in New Jersey called "Claremont: The Diner". I guess "X: The Y"
sounds cooler than "the X Y"). One of my New York City
acquaintances knows this guy's family, and used to babysit him
before he got Bayesian rationality and became a CEO. He's also mentioned
in this article about the LessWrong enclave in New York City:

http://betabeat.com/2012/07/singularity-institute-less-wrong-peter-thiel-eliezer-yudkowsky-ray-kurzweil-harry-potter-methods-of-rationality/
---------------
The situation on Alyssa Vance’s couch would have been best described
as a cuddle puddle—a tangle of hair-petting and belly-stroking and
neck-nuzzling, seven people deep. . .

The partygoers had a more solemn connection than their youthful PDA
might suggest. They were all disciples of the blog Less Wrong. . .

Considerably more radical than Kurzweil, Less Wrong is affiliated
with the Singularity Institute in Berkeley. Both were cofounded
by 32-year-old [as of early 2012] Eliezer Yudkowsky, an eighth-grade
dropout with an IQ of 143 (though he claims that might be a lowball
figure). The messianic Mr. Yudkowsky also helped attract funding
from his friend Peter Thiel. . .

While Mr. [Ray] Kurzweil has generally been viewed as the Singularity’s
chief standard-bearer, on the geekier fringe, that distinction belongs
to Mr. Yudkowsky. . .

Mr. Yudkowsky instituted a ban from the Less Wrong forums of a particularly
insidious discussion thread, ominously nicknamed “the Basilisk,”
after science fiction writer David Langford’s notion of images that
crash the mind. In the initial post, a prominent Less Wrong contributor
[Roko Mijic] mused about whether a friendly AI—one hell-bent on saving
the world—would punish even true believers who had failed to do everything
they could to bring about its existence, including donating their
disposable income to SIAI. . .

The Observer tried to ask the Less Wrong members at Ms. Vance’s party
about it, but Mr. Mowshowitz quickly intervened. “You’ve said enough,”
he said, squirming. “Stop. Stop.”
====

jimf said...

> Is there a doctor in the house?
>
> https://web.archive.org/web/20130302221710/http://www.metamed.com/our-scientists-doctors-researchers

"Senior Health Researcher and Medical Associate" Dr. Scott Siskind, who
"graduated with honors from University College Cork Medical School in 2012,
and won the Quantified Health Prize that same year[,]. . . currently works
for a nonprofit dedicated to improving human rationality[,]. . . [and] has
BAs in Psychology and Philosophy from Hamilton College[,]" is a prolific
contributor to both LessWrong and Robin Hanson's blog "Overcoming Bias".
He's also a prolific blogger in his own right, as "Scott S. Alexander"
at "Slate Star Codex" (e.g., http://slatestarcodex.com/2014/10/07/tumblr-on-miri/ ),
and has, or had, a personal Web site
( https://web.archive.org/web/20140109225058/http://raikoth.net/ ),
and has, or had, a Live Journal account
( http://squid314.livejournal.com/ ). And, hold onto your hats,
he's a psychiatrist:
http://www.healthgrades.com/provider/scott-siskind-y9v75tz
Imagine having a shrink who's Less Wrong. The mind boggles!
(No worse though, I guess, than having a shrink who's an Objectivist.
I suppose you can't actually have a shrink who's a Scientologist, though.)

I wonder if Jaan Tallinn, or Peter Thiel, is gonna buy MetaMed a Watson.
http://www.theregister.co.uk/2014/08/28/ibm_watson_scientific_research_analysis/

jimf said...

It's a small world.

Jeremy Hooper of the LGBT-advocacy blog "Good As You" posted an
article a couple of days ago about Brendan Eich (erstwhile
head of Mozilla [and originator of JavaScript] who stepped
down amid controversy over inferences about his attitudes
toward LGBT folks drawn from his political contributions):
http://www.goodasyou.org/good_as_you/2014/10/brendan-eich-found-doma-unconstitutional.html

Following a link on that page leads to a comment by Eich in
response to an article by none other than "Scott S. Alexander"
(Dr. Scott Siskind, of MetaMed) on his blog "Slate Star Codex".

I haven't read the whole article, but it seems, among other
things, to be a defense of the claim that "Red Tribe" members
are being oppressed by the Politically Correct tyranny of the
Blue Tribe:

http://slatestarcodex.com/2014/09/30/i-can-tolerate-anything-except-the-outgroup/#comment-151728
---------------
Sure enough, if industry or culture or community gets Blue enough,
Red Tribe members start getting harassed, fired from their jobs
(Brendan Eich being the obvious example) or otherwise shown the door.

Think of Brendan Eich as a member of a tiny religious minority surrounded
by people who hate that minority. Suddenly firing him doesn’t
seem very noble. . .

When a friend of mine heard Eich got fired, she didn’t see anything
wrong with it. “I can tolerate anything except intolerance,” she said.

“Intolerance” is starting to look like another one of those words
like “white” and “American”.

“I can tolerate anything except the outgroup.” Doesn’t sound quite so
noble now, does it?
====

Mr. "Alexander" is ostensibly trying to make a larger point here about the
thoughtless, knee-jerk rejection and harassment of outgroup members (of any
stripe) by members of an in-group, but his example is unfortunate.

It sounds like the claims coming from certain quarters that
(right-wing) "Christians" are now an oppressed group in
this country.

Also, it's my impression that some "rationalists" and "skeptics"
relish adopting the role of the tough-minded contrarian willing
to poke holes in contemporary "liberal pieties" in order
to pursue an abstract argument about ethical consistency or
human nature or whatever. There was a time when I myself
might have enjoyed that game, but now I'm more sensitive
to the unspoken undertones or overtones or metamessages or
(what the folks over at Overcoming Bias would call) "signalling".

Do you hear what I hear?

(Oh Jeez, some of the comments come from the Neoreaction folks.
Are we surprised? :-/ )

And,
---------------
Brendan Eich says:
October 1, 2014 at 8:56 pm

. . .

I’m a S[ocial]J[ustice]W[arrior] in my own way, even
though I’m not Blue or Red Tribe (more Purple).
I care about social justice, but I don’t share the axioms
or tactics of the loud/left/cultural-Marxist SJWs. I don’t
like that acronym, even though it has “stuck”. More people
care about social justice, and want to wage just war against
social ills, than just that one noisy “side”.
====

Esebian said...

Meanwhile, Dvorsky stamps his foot and demands we finally get our lazy asses off the couch. Just where are all my cool scifi gizmos, dammit?!

http://io9.com/12-technologies-we-need-to-stop-stalling-on-and-develop-1644404121

My favorite bits are how he wants to eugenics the fuck out of wildlife and to "accurately measure rationality". "Rational" is nothing more than Internet code for fringe beliefs at this point, isn't it? The whole thing is one big ad for his fellow very serious futuremen.

Is there no way to excise this tumor from io9? I think he's had a forum to spout his crackpot nonsense for long enough.

jimf said...

> "Rational" is nothing more than Internet code for fringe beliefs
> at this point, isn't it?

Either that, or it's the rallying cry of the new Thought Leaders of
the Age. YMMV.

http://forums.somethingawful.com/showthread.php?threadid=3627012&userid=0&perpage=40&pagenumber=45
(and cf. http://slatestarcodex.com/2014/10/07/tumblr-on-miri/ )
--------------------
So the Slatestarcodex guy [MetaMed "Senior Health Researcher and Medical
Associate" Dr. Scott Siskind, a.k.a. "Scott S. Alexander"]
responded to something I wrote on tumblr with this. . .

> Over the last decade:
>
> 1. A whole bunch of very important thought leaders including Stephen Hawking,
> Elon Musk, Bill Gates, Max Tegmark, and Peter Thiel have publicly stated they
> think superintelligent AI is a major risk. Hawking specifically namedropped MIRI;
> Tegmark and Thiel have met with MIRI leadership and been convinced by them.
> MIRI were just about the first people pushing this theory, and they’ve
> successfully managed to spread it to people who can do something about it.
>
> 2. Various published papers, conference presentations, and chapters in textbooks
> on both social implications of AI and mathematical problems relating to
> AI self-improvement and decision theory. Some of this work has been receiving
> positive attention in the wider mathematical logic community. . .
>
> 3. MIT just started the Future of Life Institute, which includes basically a
> who’s who of world-famous scientists. Although I can’t prove MIRI made this
> happen, I do know that of FLI’s five founders I met three at CFAR
> [Machine Intelligence Research Institute's spinoff, the Center for
> Applied Rationality] workshops a couple years before, one is a long-time
> close friend of Michael Vassar’s, and I saw another at Raymond’s New York Solstice.
>
> 4. A suspicious number of MIRI members have gone on to work on/help lead
> various AI-related projects at Google.
>
> 5. Superintelligence by Bostrom was an NYT bestseller reviewed in the Guardian,
> the Telegraph, the Economist, Salon, and the Financial Times. Eliezer gets
> cited just about every other page, and in MIRI HQ there is a two-way videoscreen
> link from them to Nick Bostrom’s office in Oxford because they coordinate
> so much. Searching the book’s bibliography for citations of MIRI people
> I find Stuart Armstrong, Kaj Sotala, Paul Christiano, Wei Dai, Peter de Blanc,
> Nick Hay, Jeff Kaufman, Roko Mijic, Luke Muehlhauser, Carl Shulman,
> Michael Vassar, and nine different Eliezer publications.
>
> My impression as an outsider who nevertheless gets to talk to a lot of people
> on the inside is that their two big goals are to work on a certain abstruse
> subfield of math, and to network really deeply into academia and Silicon Valley
> so that their previously fringe AI ideas get talked about in universities,
> mainstream media, and big tech companies and their supporters end up highly
> placed in all of these. . .
>
> I bet that ten years ago, I could have made you bet me at any odds that this
> weird fringe theory called “Friendly AI” invented by a guy with no
> college degree wouldn’t be on the lips of Elon Musk, Stephen Hawking,
> half of Google’s AI department, institutes at MIT and Oxford, and scattered
> throughout a best-selling book.
>
> Networking is by its nature kind of invisible except for the results, but
> the results speak for themselves. . .
====

Revenge of the nerds. ;->

Dale Carrico said...

There are no thought leaders. Thought isn't going anywhere. As someone somewhere snarked.

jimf said...

> So the Slatestarcodex guy [MetaMed "Senior Health Researcher and Medical
> Associate" Dr. Scott Siskind, a.k.a. "Scott S. Alexander"]
> responded to something I wrote on tumblr with this. . .
>
> > 4. A suspicious number of MIRI members have gone on to work on/help lead
> > various AI-related projects at Google. . .

http://forums.somethingawful.com/showthread.php?threadid=3627012&userid=0&perpage=40&pagenumber=45
----------------
SolTerrasa
Oct 4, 2014

I'm going to start by talking about the Google bits. Standard disclaimers:
I speak for myself, not Google, I'm just some dude who works there and
I'm not PR-approved. . .

For this to make sense you have to understand a couple of things about
Google. Number one is that we don't get orders from on high very often.
We might get a goal, . . . and then that will filter down through the levels
and be increasingly refined by people who are closer and closer to the problem.
The goals for people at the very leaf nodes of the org tree are still
often freakishly abstract, and you can use pretty much any kind of solution
you want. . . Last quarter, I decided I really liked AI and
M[achine]L[earning], so I solved a bunch of problems with AI and ML. That's
what I mean when I say I'm working on AI projects at Google; it's not like
I'm trying to build a sentient lifeform or whatever. So he [Scott Siskind,
a.k.a. Scott S. Alexander] might be telling the truth, but it's not
likely that it **means** anything.

So let's take "leading AI projects at Google" to mean "works at Google, likes AI".
Basically everything at Google is confidential in one way or another, so
I probably can't even confirm what people are working on, or what our AI
projects are, or whatever. This is why the MIRI people think we have all
these AI projects; because they would if they had our nigh-unlimited
resources, and no one will ever deny it, so therefore these projects exist. . .

I don't really understand what he's implying. Is he implying that Google
is poaching Friendly AI researchers for our own FAI project? Is he implying
that the people who work on FAI are mostly smart enough to work at Google?
I have very different things to say about those two things.

Needless to say, we're probably not poaching people for Friendly AI.
If we were I think I'd have noticed. Though it is loving hilarious that
he noticed "people are leaving MIRI for Google" and didn't connect it to
"because MIRI is a poo poo organization to work for if you're brilliant,
and Google isn't." Instead in his head it's because they've been poached
to work on Google's own friendly AI.

Also, I want to point out that of the many people at Google I've interacted
with on AI topics, basically none knew about Yudkowsky, most of the rest thought
his ideas were hilarious, exactly one took him seriously, and a tiny fraction
thought he was wrong but that we shouldn't mock him. He doesn't enter the
discourse here hardly ever, so that tells you something about how effective
his "networking" has been.

In stack ranking terms my vague guess would be that Yudkowsky ranks above
"faith healers", but below "climate change deniers" in terms of Engineering-wide
Googler belief (I don't know about marketing/sales/ whatever), and below
all those in terms of Eng[ineering]-wide Googler awareness/engagement.

But again, I don't speak for Google, just my own impressions of the culture here. . .
====

jimf said...

> Read [Nick Bostrom's] new book, if you don't want to be Left Behind!
> ( http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/ )
> . . .
> MetaMed "Senior Health Research and Medical
> Associate" Dr. Scott Siskind, a.k.a "Scott S. Alexander" [wrote:]
>
> > 1. A whole bunch of very important thought leaders including Stephen Hawking,
> > Elon Musk, Bill Gates, Max Tegmark, and Peter Thiel have publicly stated they
> > think superintelligent AI is a major risk. . .
>
> > 5. Superintelligence by Bostrom was an NYT bestseller reviewed in the Guardian,
> > the Telegraph, the Economist, Salon, and the Financial Times. Eliezer gets
> > cited just about every other page, and in MIRI HQ there is a two-way videoscreen
> > link from them to Nick Bostrom’s office in Oxford because they coordinate
> > so much. Searching the book’s bibliography for citations of MIRI people
> > I find Stuart Armstrong, Kaj Sotala, Paul Christiano, Wei Dai, Peter de Blanc,
> > Nick Hay, Jeff Kaufman, Roko Mijic, Luke Muehlhauser, Carl Shulman,
> > Michael Vassar, and nine different Eliezer publications.

IEET gives air-time to a naysayer and Singularity party-pooper:

http://ieet.org/index.php/IEET/more/scaruffi20141012
Book review: Nick Bostrom’s “Superintelligence”
by Piero Scaruffi
Oct 12, 2014

(scaruffi is the author of
http://www.amazon.com/Demystifying-Machine-Intelligence-Piero-Scaruffi/dp/0976553198/ )

Dale Carrico said...

"Eliezer gets cited just about every other page"

I seem to recall that Yudkowsky first claimed he didn't need to get a degree in any of the fields on which he illiterately pontificates because the singularity was so near it would be a waste of time. Of course nowadays Robot Cultists like Bostrom who managed to do the work to get into the academy are so busy enabling Yudkowsky as fellow-faithful, getting him publications and citations and speaking gigs, that it remains a waste of time for Yudkowsky to set aside the guru-gig and actually see if his marginal convictions would long survive unqualified were he to go through the long slog of engaging with people who actually know what they're talking about. I actually do not doubt Yudkowsky is smart enough to benefit from an actual degree program and engagement with real research. The path he is on does damage to the world, but also to himself as far as I'm concerned. A caveat though: philosophy departments will obviously let anybody through (even me), and Yudkowsky would not be helped in the least by a degree in philosophy which he then treated as an endorsement of his skills in computer science or his knowledge of physics. It is another sign of the extreme multi-generational decline into crisis of Anglo-American analytic philosophy that it can no longer insulate itself from futurology being done under its auspices.

Daniel said...

Update on MetaMed:

It folded in less than three years.

Termination

By 2015, MetaMed was defunct, which Tallinn attributed to limited interest in the service and the organization's lack of expertise.[1]

https://en.wikipedia.org/wiki/MetaMed