Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Saturday, January 02, 2016

Credularity

I wonder if Ray Kurzweil will ever publish the only prediction his conduct reveals he truly believes: There's a sucker born every minute.

11 comments:

Unknown said...

Yeah! I mean, the dude is seriously into supplements that are proven pseudoscience. It is rare for someone drinking one form of Kool-Aid not to drink more.

jimf said...

> There's a sucker born every minute.

Of course, the poor guy is likely enough suckering **himself**.

I just saw this go by in the neighborhood of an
item in your twitter-scroll:

http://www.theverge.com/2015/12/29/10642070/2015-theranos-venture-capital-tech-bubble-disruption
-------------
Silicon Valley is confusing pseudo-science with innovation

This year we saw what happens when you mix venture capital
with dubious health startups

By Ben Popper and Elizabeth Lopatto
December 29, 2015 10:30 am

. . .

On the surface everything about Theranos looked good, right?
It wasn’t until after The Wall Street Journal dug in that all
the irregularities in partnerships, relationships with regulators,
and general fuckery began to surface. . .
====

Theranos was in the NY Times a few weeks ago:
http://www.nytimes.com/2015/12/20/business/theranos-founder-faces-a-test-of-technology-and-reputation.html

Loc. cit.
-------------
The medical field doesn’t move as fast as the software industry
because moving fast and breaking things is fine for things but
not for people. . .

The thing is, I’m not sure Silicon Valley sees the difference. . .
I know some pretty smart health care investors. . . Where’s all
the dumb money coming from? . . .

I prefer cluelessness to the other option I see. Which is that
a bunch of really cynical people saw 23andMe and were like,
right, we’ll just sell 'til they make us stop and **then**
we’ll get serious about the real standards. Did I get too dark?

Silicon Valley has a libertarian streak exemplified in
companies like Uber and Airbnb. Do whatever the customer
likes best, worry about the regulations later. . .

[T]he fact that companies like Alphabet [rejiggered Google]
think life extension is a desirable possibility does not
fill me with confidence about their other projects. We’re
living a lot longer than we used to, but the last years are
also way sicker than they used to be. . . The singularity is
so laughable I don’t even know where to start. It’s like
they’re funding fantasy out there.

In terms of life extension, here are the real opportunities:
closing the gap between black and white patients, lowering
the infant mortality rate, and making sure the very poorest
among us have access to adequate care. You can make sure that
many people live longer, right now! But none of this is quite
as sexy as living forever, even though it’s got a greater payoff
for the nation as a whole. So instead of investing in these areas,
you’ve got a bunch of old white men who are afraid to die
trying to figure out cryonics. They’re being funded by more rich
old white men, who don’t face many of these care gaps and
perhaps do not even know they exist — or don’t care, because
how do you monetize serving the poor? . . .

[I]n the wake of Theranos, I bet there will be less snake oil
and pseudo-science that somehow gets funded. Meanwhile,
Larry and Sergei will keep throwing their money at the
search for eternal life, but eh, who are we to complain?. . .
====

jimf said...

Speaking of "dubious health startups", you know
MetaMed went quietly belly-up early last year (2015).

I found out about that from chitchat on Tumblr last month:

http://reddragdiva.tumblr.com/post/135248856608/argumate-the-metamed-failure-is-a-little-sad
----------
David Gerard
Dec 15th, 2015

argumate:

> The Metamed failure is a little sad. It reminds me of an important couplet:
>
> 1. A sufficiently smart person can do anything!
>
> 2. …with sufficient time and effort, and not as well as an experienced person.
>
> While it is important to remember the first part, you may go astray if
> you forget the second part.

a little sad, a lot of lulz though. there is nothing about this that was not
predictable and (iirc, would need to check LW) predicted.

the sheer multilayered incompetence (yes, they hired a struck off doctor. yes,
they deleted awkward questions on a blog ask-me-anything. yes, they spent
their last days knowingly stiffing people) was pretty good also. it needs
a suitable cataloguing in sneer culture. (i have no idea when i will be
bothered, if anyone else wants to step up.)

frankly, when actually detrimental terrible lesswrong ideas were being put
into practice in medicine, a real-life area that could have hurt people
(and is in no way short of existing cranks and quacks targeting the worried
well), a rapid and hilarious failure was probably the best outcome.
====

In MetaMed's defense, a post-mortem from one of its founders, Zvi Mowshowitz:
https://thezvi.wordpress.com/2015/06/30/the-thing-and-the-symbolic-representation-of-the-thing/
(via Scott Alexander [Scott Siskind]
http://slatestarcodex.com/2015/08/05/ot25-obon-thread/#comment-224010 )

Good grief, I didn't realize "Zinnia Jones" had been involved
in MetaMed!
http://freethoughtblogs.com/zinniajones/2013/03/metamed-the-best-second-opinion/
(via
http://www.patheos.com/blogs/hallq/2013/03/metamed/ )

Don't leave just yet:

http://reddragdiva.tumblr.com/post/135260842008/urpriest-argumate-wait-now-michael-vassar
----------
urpriest:

> argumate:
>
> > wait, now Michael Vassar is running a company called BayesCraft
>
> Is it an R[eal]T[ime]S[trategy game]? Are the resource-gatherers MIRI employees?

it’s ANOTHER MEDICAL COMPANY.

note that bio doesn’t mention metamed.

i’m sure this will all be fine, fine
====

Like a phoenix from the ashes.

jimf said...

From your twitter scroll:

https://twitter.com/mcnees/status/683414818246770688
----------
Robert McNees
@mcnees
2:29 PM - 2 Jan 2016

No. Preventing hypothetical AI disasters is not more important
than addressing poverty.

http://www.vox.com/2015/8/10/9124145/effective-altruism-global-ai
====


http://www.vox.com/2015/8/10/9124145/effective-altruism-global-ai
----------
I spent a weekend at Google talking with nerds about charity.
I came away ... worried.

Updated by Dylan Matthews
August 10, 2015

"There's one thing that I have in common with every person
in this room. We're all trying really hard to figure out how
to save the world." . . .

[Cat] Lavigne was addressing attendees of the Effective Altruism Global
conference, which she helped organize at Google's Quad Campus
in Mountain View the weekend of July 31 to August 2. . .

Effective altruism (or EA, as proponents refer to it) is more
than a belief, though. It's a movement, and like any movement, it
has begun to develop a culture, and a set of powerful stakeholders,
and a certain range of worrying pathologies. At the moment,
EA is very white, very male, and dominated by tech industry
workers. And it is increasingly obsessed with ideas and data
that reflect the class position and interests of the movement's
members rather than a desire to help actual people. . .

I identify as an effective altruist. . . I even think AI risk
is a real challenge worth addressing. But speaking as a white
male nerd on the autism spectrum, effective altruism can't just
be for white male nerds on the autism spectrum. . .

EA Global was dominated by talk of existential risks, or X-risks. The idea
is that human extinction is far, far worse than anything that could
happen to real, living humans today.

To hear effective altruists explain it, it comes down to simple math.
About 108 billion people have lived to date, but if humanity lasts
another 50 million years, and current trends hold, the total number
of humans who will ever live is more like 3 quadrillion. Humans
living during or before 2015 would thus make up only 0.0036 percent
of all humans ever.

The numbers get even bigger when you consider — as X-risk advocates
are wont to do — the possibility of interstellar travel. Nick Bostrom —
the Oxford philosopher who popularized the concept of existential
risk — estimates that about 10^54 human life-years (or 10^52 lives
of 100 years each) could be in our future if we both master travel
between solar systems and figure out how to emulate human brains
in computers.

Even if we give this 10^54 estimate "a mere 1% chance of being correct,"
Bostrom writes, "we find that the expected value of reducing
existential risk by a mere one billionth of one billionth of one
percentage point is worth a hundred billion times as much as a
billion human lives." . . .
====
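
(For what it's worth, the population arithmetic there does check out.
A quick back-of-the-envelope in Python -- the 108 billion, 3 quadrillion,
and 10^52 figures are the article's, not mine:

    # Share of all humans ever, per the article's figures
    lived_so_far = 108e9        # humans born to date
    total_ever = 3e15           # projected total over 50 million years
    print(f"{lived_so_far / total_ever:.4%}")   # -> 0.0036%

    # Bostrom's premise, as quoted: 10^52 century-long lives,
    # discounted by "a mere 1% chance of being correct"
    print(f"{0.01 * 1e52:.0e}")   # expected future lives -> 1e+50

Everything after that is a matter of multiplying made-up probabilities
together.)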

Loc. cit.
----------
Robert McNees
@mcnees

This hijacking of effective altruism is a silly, privileged
comp-sci fantasy. Also, their math makes no sense.

---

drmagoo
@drmagoo

And they base their premise on practical interstellar travel?
Why not include warp drives and humans living 1000 years?

---

James
@jmac_ai

FWIW, effectively zero of my AI research colleagues share the
existential threat fear. It seems to mostly come from outside.
====

https://twitter.com/firepile
----------
Robin Z.
@firepile

"But those probability values are literally just made up."
YES THEY ARE. I'm so embarrassed to work in AI these days
====

I miss seeing Robin Z. in the Moot!

Was it the moon?
No, no. The Bossa Nova!
The stars above?
No, no. The Bossa Nova!
Was it the tune?
Yeah, yeah. The Bossa Nova!
The dance of love.

;->

jimf said...

> . . . post-mortem of MetaMed by founder Zvi Mowshowitz. . .
> https://thezvi.wordpress.com/2015/06/30/the-thing-and-the-symbolic-representation-of-the-thing/
> https://thezvi.wordpress.com/2015/05/15/in-a-world-of-venture-capital/

I.e., this guy (via
http://amormundi.blogspot.com/2014/02/ahems.html
http://amormundi.blogspot.com/2014/10/very-serious-robocalyptics.html )

https://web.archive.org/web/20120728023014/http://betabeat.com/2012/07/singularity-institute-less-wrong-peter-thiel-eliezer-yudkowsky-ray-kurzweil-harry-potter-methods-of-rationality
----------------

. . .

“The AI is smarter than we are, so it would kill everyone.
Or it wants all our resources, so of course it’s going to kill everyone,”
Zvi Mowshowitz explained as the assembled rose from the couch to whoop it up to
show tunes and eighties pop hits. Mr. Mowshowitz, who lives a couple
floors up at The Caroline with his girlfriend (the neuroscientist),. . .
was wearing electric blue gym shorts and a homemade T-shirt commemorating his reign
as a professional champion of the Magic: The Gathering fantasy card game.
Mr. Mowshowitz is currently working with Ms. Vance and Jaan Tallinn,
the renowned Estonian programmer behind Skype and Kazaa, on a personalized
medicine startup. . .

“I’ve made my peace with the fact that, you know, **this** is not going to last,”
Mr. Mowshowitz said, looking out the window at weekend traffic on
Sixth Avenue as though it would all disappear. “We have a very dysfunctional
civilization right now. There are better things that could be done.” . . .

The people behind SIAI. . . are actively engaged in reframing Armageddon.
On the webpage “Why Work Toward the Singularity,” SingInst offers a
gloriously transcendent vision of AI as mankind’s salvation. . .
Meanwhile, cohorts focused on anti-aging, nanotechnology,
longevity and transhumanism are at work on genetic therapies and body-hacks
that will extend our lifespans beyond those of the vampire population of
True Blood.

Mr. Mowshowitz calls it escape velocity. “That’s where medicine is
advancing so fast that I can’t age fast enough to die,” he explained.
“I can’t live to 1,000 now, but by the time I’m 150, the technology
will be that much better that I’ll live to 300. And by the time I’m 300,
I’ll live to 600 and so on,” he said, a bit breathlessly. “So I can
just . . . escape, right? And now I can watch the stars burn out in the
Milky Way and do whatever I want to do.” . . .

[Alyssa] Vance, who glided around the room with the head-bob and
muffled laugh of a very polite alien, interrupted Mr. Mowshowitz to
share the business card of a “cryo life insurance guy.” Not necessary;
he was already covered. . .

While [Ray] Kurzweil has generally been viewed as the Singularity’s
chief standard-bearer, on the geekier fringe, that distinction belongs
to [Eliezer] Yudkowsky. . .

Mr. Yudkowsky instituted a ban from the Less Wrong forums of a particularly
insidious discussion thread, ominously nicknamed “the Basilisk.” . . .
[A] prominent Less Wrong contributor [Roko Mijic] mused
about whether a friendly AI—one hell-bent on saving
the world—would punish even true believers who had failed to do everything
they could to bring about its existence, including donating their
disposable income to SIAI. . .

The Observer tried to ask the Less Wrong members at Ms. Vance’s party
about it, but Mr. Mowshowitz quickly intervened. “You’ve said enough,”
he said, squirming. “Stop. Stop.”
====
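
That "escape velocity" riff is essentially Aubrey de Grey's "longevity
escape velocity" claim: if medicine adds more than one year of remaining
life expectancy per calendar year, you never run out. A toy sketch of the
arithmetic (my illustration, with made-up numbers; nothing here comes
from the article):

    # Toy model: each calendar year spends one year of remaining life
    # expectancy, while medical progress hands back `gain` years.
    def years_until_death(remaining, gain, horizon=1000):
        for year in range(1, horizon + 1):
            remaining += gain - 1      # net change per calendar year
            if remaining <= 0:
                return year            # dead after this many years
        return None                    # never dies within the horizon

    print(years_until_death(50, gain=0.5))   # -> 100 (no escape)
    print(years_until_death(50, gain=1.1))   # -> None ("escape velocity")

The whole scheme lives or dies on that gain > 1 assumption, which is
doing exactly the work Mowshowitz's "the technology will be that much
better" is doing.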

So how come Peter Thiel didn't buy (or rent) MetaMed a Watson? ;->

jimf said...

Foom and doom.

http://www.overcomingbias.com/2014/07/30855.html
-----------
I Still Don’t Get Foom
By Robin Hanson
July 24, 2014

. . .


dmytryl

I came to realize that there's nothing really to get.
It's the second coming, the 'skynet awakening',
a part of our cultural heritage, and it is something strongly
preferred by narcissistic minds engaged in
"fantasies of unlimited power, success, intelligence".
But that's all there [is] to it.

---

Stephen Diamond

My diagnosis is that it's an effort of some newly atheistic folks
to cope with their fear of death that, because of their previous
religiosity, they've never mastered. Same with cryonics.
====

jimf said...

So last year saw a lot of hoopla surrounding Nick Bostrom's
_Superintelligence_.

But the book I'm really waiting for will be written by
a **disaffected** Singularitarian or Lesswrongian -- somebody
like Dmytry Lavrov, Alexander Kruel, David Gerard,
Chris Hallquist -- heck, maybe even Dale Carrico. ;->

I'm hoping it will be a jewel of what the LWians and
SlateStarCodexians call "sneer culture".

Hey, you know who would be the perfect author for such
a book? Roko Mijic! His name has already been immortalized
on the Web -- a book repudiating his youthful folly would
be guaranteed an instant audience! Roko -- your destiny
is calling! Here's the outline:

http://www.patheos.com/blogs/friendlyatheist/2016/01/04/this-is-why-i-joined-the-church-of-scientology-as-a-teenager/
------------------
This is Why I Joined the Church of Scientology as a Teenager
January 4, 2016
by Hemant Mehta

Chris Shelton spent 27 years in the Church of Scientology
before finally coming to his senses and leaving. It’s been
three years now since he left and he’s doing everything he
can to make sure no one else makes the same mistake.

His new book detailing his experience within the organization
is called _Scientology: A to Xenu: An Insider’s Guide to What
Scientology is All About_ . . .

Was I a stupid person? No, I wasn’t. I was quite smart
actually and got good grades and even at 15 years old,
I knew a lot of things. I wasn’t stupid but I was naïve
and reckless. I didn’t know con men really existed, people
can lie without knowing they are lying and just because
someone says they are your friend, doesn’t make it so.
If I have any “weakness” in my life, it’s my trust and
optimism. Even after everything I’ve been through, I still
believe people are basically good and are worthy of my trust
before they prove me wrong. Of course, that is now tempered
by all my experience so I’m no fool anymore but back then,
yeah. I was a fool. I was also desperate to be thought well
of and I would do anything, repeat anything, to be popular.
====

jimf said...

> http://www.patheos.com/blogs/friendlyatheist/2016/01/04/this-is-why-i-joined-the-church-of-scientology-as-a-teenager/
> Chris Shelton spent 27 years in the Church of Scientology. . .

He has an interesting YouTube channel:
https://www.youtube.com/channel/UCF326xyA0QHI7Z5xAwKQDJg/videos

Also on YouTube:
(via
http://tonyortega.org/2015/04/23/see-going-clear-star-hana-whitfield-describe-l-ron-hubbard-in-a-leaked-1997-interview/ ):

Hana Eltringham Whitfield was one of the stars of Alex Gibney’s documentary
about Scientology, Going Clear. Her memories as the captain of
L. Ron Hubbard’s flagship as he ran Scientology at sea in the late 1960s
and early 1970s were among the highlights of the film. Now, we have
a leak from a previous documentary, UK Channel 4’s excellent 1997
Secret Lives — L. Ron Hubbard, which featured Hana prominently.

Here, for the first time, is a much more complete version of her
interview, just one of many outtakes from the 1997 documentary that
a source has been making available for us.

https://www.youtube.com/watch?v=qw-At2NNyZo
Hana Eltringham Whitfield - L Ron Hubbard's
Ship Captain - Secret Lives - Scientology - Dianetics
Published on Apr 23, 2015

Fascinating stuff.

jimf said...

http://www.vice.com/en_ca/read/theres-something-weird-happening-in-the-world-of-harry-potter-168
--------------
The Harry Potter Fan Fiction Author Who Wants to Make
Everyone a Little More Rational
By David Whelan
March 2, 2015

. . .

On March 14, the most popular Harry Potter book you've never
heard of, Harry Potter and the Methods of Rationality, will
come to its conclusion. It has been running online as a
fan fiction for the past five years. It is 600,000 words
long and contains 112 chapters. By the end, we'll be looking
at a grand total of 700,000 words and 125 chapters. This
will put it somewhere between Gravity's Rainbow and Route 66
in terms of length.

It has over 7,000 Reddit fans, 26,000 reviews, and a fan-made
audiobook.

There will be worldwide wrap parties to celebrate its culmination. . .

This new Potter. . . [is] basically the Jesus Christ of Rational Thought.
He owns this book. He hits Voldemort out of the fucking park with a
bunt while scratching his ass with his foot. And -- here's the kicker—if
you start copying him -- that is, making rational decisions that overcome
cognitive biases -- you, too, can make life your bitch.

Welcome to the world of rational thinking, the art of being Less Wrong. . .

Taking a read of his website, it becomes quite clear that Yudkowsky
is not your average fan fiction author. He is far more likely to talk
about the Twelve Virtues of Rationality than how sad he was when
Dumbledore died. His updates for his fan fiction include links to
a place called the Center for Applied Rationality, where he is a
Curriculum Consultant. . .

There's a curious correlation between the work at CFAR and that
which occurs at Yudkowsky's day job at the Machine Intelligence
Research Institute, whose main goal seems to be to ensure that
Skynet never happens. The former helps make humans think like
machines. The latter makes sure super smart computers think
like us. . .

The website for CFAR reveals a lot about the aims of the association—helping
people overcome flawed thinking to self-improve. . .

Make no mistake, this is a self-help system, just as something like
Dianetics originally was. . .

If this all sounds slightly cultish to you— a sacred text, a big
bold call out for test subjects, the promise of a happier life,
the call for donations on top of fees—that's because there are
similarities here to the growth of other belief structures.
Only in Silicon Valley would we get a group that treats the
human mind like an app. . .

When, in the 1930s [ https://en.wikipedia.org/wiki/Dianetics#History ],
science fiction author L. Ron Hubbard began work on _Dianetics: The Modern
Science of Mental Health_, no one could have anticipated his brand of self-help
later becoming the center of a multimillion-dollar religion. It's strange, but
it doesn't seem a stretch to say there are echoes of that
movement here. . .
====

jimf said...

> The Harry Potter Fan Fiction Author Who Wants to Make
> Everyone a Little More Rational. . .
>
> When, in the 1930s, science fiction author L. Ron Hubbard began
> work on _Dianetics: The Modern Science of Mental Health_, no
> one could have anticipated his brand of self-help later becoming
> the center of a multimillion-dollar religion. It's strange,
> but it doesn't seem a stretch to say there are echoes of that
> movement here. . .

https://www.reddit.com/r/scientology/comments/39ssuw/structural_functionalist_analysis_of_cults_and/
---------------
Echo1883

> do you think cults like scientology exist because they serve
> a function for a certain type of person?

Yes. Scientology served Hubbard and later Miscavige. Mormonism served Smith,
Young and each president since. Ramtha's School of Enlightenment serves
JZ Knight. The Machine Intelligence Research Institute (Less Wrong) serves
Eliezer Yudkowsky. Heaven's Gate served Marshall Applewhite.
The People's Temple served Jim Jones.

Each cult is created to serve the founder and then continues to serve the
wishes of the leaders after them if the cult manages to survive the death
of the founder. Most often this is power or money. Often both. For example,
Smith was charged with running a scam using the very stones he later claimed
allowed him to translate the golden plates. During his work "translating"
the golden plates that he claimed existed, he convinced a poor fellow to
mortgage his farm for what would today be hundreds of thousands of dollars
to fund his publication of the Book of Mormon. Another example would be David Miscavige
who currently lives in luxury while his Sea Org slaves work absurd hours in
horrible conditions with living conditions that I believe are nothing short
of a violation of a person's basic human rights. These cults serve to bring
the leaders wealth at the expense of their followers, power due to absolute
devotion from their adherents, and they serve to glorify the individual. Smith
claimed that no other man, other than Jesus, had ever or will ever do more
for mankind's salvation than he himself had done, Hubbard claimed to be the
only person to ever provide a chance for mankind to save itself from eternal
imprisonment here on Earth.

> do you think institutions like scientology offer a complex grand narrative
> for the person who needs this type of order?

I believe this of religion in general. And no more so for cults than for other
religions. Rather, the individual cultist is often taught that THEY are something
special and that the general order of society does not apply to them. They
are often taught that the complex society is just a deviation from some simpler
way of life available by following some new teachings.
====

jimf said...

It seems that over at Pharyngula, P. Z. Myers had a thing
or two to say the other day about "Effective Altruism" as equivalent to
"donating to a cult that's going to save the world from killer robots".

http://freethoughtblogs.com/pharyngula/2016/01/03/are-these-people-for-real/

(via
http://reddragdiva.tumblr.com/post/136752194363/sigmaleph-reddragdiva-sigmaleph-oh-god-pz )

From the above (David Gerard's Tumblr post):

"while miri has yet to be laughed out, that is in fact the most important
thing to say. which is why it was [dylan] matthews’ key point too
[ http://www.vox.com/2015/8/10/9124145/effective-altruism-global-ai ]. . .

and that’s without even addressing your implicit assumption that
giving miri money does any actual thing about any actual existential risk,
rather than funding fanfic, blogging and an impossibly slow drip of
minimum-publishable-unit papers they don’t even bother to get properly
published. miri is a rabbit hole. the make a wish foundation is
**literally more effective on any measurable level** than miri: they
have clear, achievable aims and achieve them regularly."