Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Tuesday, October 03, 2017



jimf said...

Fake News from today's Failing New York Times.
After Las Vegas Shooting, Fake News Regains Its Megaphone
OCT. 2, 2017

When they woke up and glanced at their phones on Monday morning,
Americans may have been shocked to learn that the man behind
the mass shooting in Las Vegas late on Sunday was an anti-Trump
liberal who liked Rachel Maddow, that the F.B.I.
had already linked him to the Islamic State, and that
mainstream news organizations were suppressing news that he had
recently converted to Islam.

They were shocking, gruesome revelations. They were also
entirely false — and widely spread by Google and Facebook.

In Google’s case, trolls from 4Chan, a notoriously toxic
online message board with a vocal far-right contingent,
had spent the night scheming about how to pin the shooting on
liberals. . .

In addition, some users saw a story on a “trending topic”
page on Facebook for the shooting that was published by Sputnik,
a news agency controlled by the Russian government.
The story’s headline claimed, incorrectly, that the F.B.I.
had linked the shooter with the “Daesh terror group.”

Google and Facebook blamed algorithm errors for these. . .

But this was no one-off incident. Over the past few years,
extremists, conspiracy theorists and government-backed propagandists
have made a habit of swarming major news events, using search-optimized
“keyword bombs” and algorithm-friendly headlines. These organizations
are skilled at reverse-engineering the ways that tech platforms
parse information, and they benefit from a vast real-time amplification
network that includes 4Chan and Reddit as well as Facebook, Twitter
and Google. Even when these campaigns are thwarted, they often last
hours or days — long enough to spread misleading information to
millions of people.

The latest fake news flare-up came at an inconvenient time for
companies like Facebook, Google and Twitter, which are already
defending themselves from accusations that they have let malicious
actors run rampant on their platforms. . .

Boy, the word "algorithm" has connotations these days undreamed-of
when I first learned it in computer science classes almost 40 years ago.

jimf said...

So I've recently been reading _Let's Sell These People
A Piece of Blue Sky: Hubbard, Dianetics and Scientology_
by Jon Atack.

I first came across the author being interviewed on ex-Scientologist
Chris Shelton's YouTube channel. I'm continually amazed (and
appalled) by the fact that **only** the advent of the internet --
purely a technological development, not a political or social
innovation per se -- has finally begun to weaken the power of cults like
Scientology, and their narcissistic gurus. Prior to the internet
becoming a consumer information source, a decade or two ago,
the so-called free press, and the mechanisms of civil society --
courts, legislatures, government bureaucracies,
law-enforcement -- were appallingly helpless in the face of such
a well-organized cult. There's been a similar internet-fueled
empowerment of on-line communities of ex-members of other cults
and/or religious groups -- the Mormons, the Jehovah's Witnesses,
and many other less well-known groups. (I often enjoy dipping
into the archive of John Larsen's Mormon Expression podcasts.)

So anyway, apropos of both cults and AI, I was desultorily Googling
Scientology-related stuff the other day, and I came across this
entertaining document:
An Open Letter to Eliezer Yudkowsky
Professor J. Moriarty
Feb 2016

You will have to explain why you are directing your pet administrators
in wikipedia to censor the criticism section that contained references
to the recent blog posts, articles and interviews of the top
machine learning researchers (in particular, the highly respected
Yann LeCun, Yoshua Bengio, and Ben Goertzel) that harshly criticized
your pseudo-scientific claims that AI technology will destroy mankind
with 20-30% probability. Your nonsensical claims put the lives of
the above top machine learning researchers in absolute danger, and
Ben Goertzel has received a cold blooded death threat from one of
your aides before. The real probability of such an extraordinary,
extreme event is extremely low a priori, and you have no extraordinary
evidence for it. . .

You have even fooled Elon Musk, and he has been given an international
Luddite Award because of your excessive stupidity and ignorance. . .

[W]e see that your pet wikipedia administrator Silence is affiliated
with MIRI, he keeps editing wikipedia to show you as an important man,
and prevent all criticism with invalid excuses that violate
wikipedia etiquette. However, your attempts to censor all criticisms
of your person, and the creationist imbecile Nick Bostrom, and your
pseudo-scientific, neo-luddite cults called MIRI and FHI are not welcome. . .

[T]he criticisms of Dustin Juliano about AI Eschatology, as well
as the views of Yann LeCun, Yoshua Bengio, and Ben Goertzel,
as well as my own, were censored on the. . . wikipedia
page and section: Existential risk from advanced artificial intelligence

The foolish, crypto-theist, pseudo-scientific MIRI and FHI members
are censoring criticisms of their page on wikipedia, as they have
obviously infiltrated wikipedia. I have asked for the aid of
fellow AI researchers to contact wikimedia and report this blatant
attempt to censor criticism of pseudo-science [well, that's why
David Gerard started RationalWiki ;-> ].

This is not the first time pseudo-science / new-age religion cults
have tried to silence criticism of their scams. Scientology in the
past has behaved similarly, and like MIRI/FHI/FLI, they also recruited
clueless celebrities and people looking for some action to popularize
their nonsense. . .

jimf said...

I gather that the "Fractal Future Forum" is a fairly recently-formed
(and low-profile) website/wiki for enthusiasts of "futurism",
transhumanism, and the creation of SFnal scenarios.

"Professor J. Moriarty"'s real name is reportedly "Peter H. Meadows",
and he claims to have some real-world expertise in the field of
AI/machine learning. I have not attempted to verify that claim.

"Moriarty" goes on to post the criticisms which he claims were
censored from the Wikipedia article about "AI Eschatology", a document
which also exists in similar (and more readable) form on the blog
of one "Eray Özkural".

I hadn't heard before about the Luddite award given to Stephen Hawking
and Elon Musk (or about the Information Technology and Innovation Foundation,
for that matter).
(The Future of Life Institute published a response.)

There's another amusing rant on Mr. Özkural's blog:
"Scams and Frauds in the Transhumanist Community"
and a Facebook article by the same guy
"Why do smart people fall for AI doomsaying nonsense?"

The latter article contains the eye-rolling comment
"I can see why MIRI/FHI/FLI people are offended. They probably understand that
my genius-level criticism has some effect on above-average intelligence people
that they are currently trying to "convert" to their cult."

Oh God, not another self-styled "genius"! :-0

Insult of the day: cis-sapient

And the beat goes on. . .

jimf said...

Move over, Turing Church.
God Is a Bot, and Anthony Levandowski Is His Messenger
Mark Harris

Many people in Silicon Valley believe in the Singularity -- the day
in our near future when computers will surpass humans in intelligence
and kick off a feedback loop of unfathomable change.

When that day comes, Anthony Levandowski will be firmly on the side
of the machines. In September 2015, the multi-millionaire engineer
at the heart of the trade secrets lawsuit between Uber and Waymo,
Google’s self-driving car company, founded a religious organization
called Way of the Future. Its purpose, according to previously unreported
state filings, is nothing less than to “develop and promote the
realization of a Godhead based on Artificial Intelligence.” . . .


On the other hand,
Why Everyone Is Hating on IBM Watson—Including the People Who Helped Make It
Jennings Brown

. . .

Their marketing and PR has run amok—to everyone’s detriment. . .

IBM Watson is the Donald Trump of the AI industry -- outlandish
claims that aren’t backed by credible data. . .

But wats an "AI", exactly?
Will Mark Zuckerberg ‘Like’ This Column?
Maureen Dowd
SEPT. 23, 2017

. . .

ProPublica broke the news that, until it asked about it
recently, Facebook had “enabled advertisers to direct their pitches
to the news feeds of almost 2,300 people who expressed interest
in the topics of ‘Jew hater,’ ‘How to burn jews,’ or,
‘History of “why jews ruin the world.”’”

Sheryl Sandberg, Facebook’s C.O.O., apologized for this on Wednesday
and promised to fix the ad-buying tools. . .

The Sandberg admission was also game, set and match for Elon Musk,
who has been sounding the alarm for years about the danger of
Silicon Valley’s creations and A.I. mind children getting out of
control and hurting humanity. His pleas for safeguards and regulations
have been mocked as “hysterical” and “pretty irresponsible” by Zuckerberg.

Zuckerberg, whose project last year was building a Jarvis-style A.I. butler
for his home [wot, not "Robbie" style? ;->], likes to paint himself as
an optimist and Musk as a doomsday prophet. But Sandberg’s comment shows
that Musk is right: The digerati at Facebook and Google are either being
naïve or cynical and greedy in thinking that it’s enough just to have
a vague code of conduct that says “Don’t be evil,” as Google does.

As Musk told me when he sat for a Vanity Fair piece: “It’s great when the
emperor is Marcus Aurelius. It’s not so great when the emperor is Caligula.”

In July, the chief of Tesla and SpaceX told a meeting of governors that
they should adopt A.I. legislation before robots start “going down the
street killing people.” In August, he tweeted that A.I. going rogue
represents “vastly more risk than North Korea.” And in September, he
tweeted out a Gizmodo story headlined “Hackers Have Already Started to
Weaponize Artificial Intelligence,” reporting that researchers proved
that A.I. hackers were better than humans at getting Twitter users to
click on malicious links. . .

jimf said...
Will Machines Eliminate Us?
by Will Knight
January 29, 2016

Yoshua Bengio leads one of the world’s preëminent research groups
developing a powerful AI technique known as deep learning. . .

Prominent figures such as Stephen Hawking and Elon Musk have. . .
cautioned that artificial intelligence could pose an existential
threat to humanity. Musk and others are investing millions
of dollars in researching the potential dangers of AI, as well
as possible solutions. But the direst statements sound
overblown to many of the people who are actually developing
the technology. Bengio, a professor of computer science at
the University of Montreal, put things in perspective in an
interview with MIT Technology Review’s senior editor for AI
and robotics, Will Knight. . .

> Did you ever think you’d have to explain to people that
> AI isn’t about to take over the world? That must be odd.

It’s certainly a new concern. For so many years, AI has
been a disappointment. As researchers we fight to make the
machine slightly more intelligent, but they are still so stupid.
I used to think we shouldn’t call the field artificial intelligence
but artificial stupidity. Really, our machines are dumb,
and we’re just trying to make them less dumb.

Now, because of these advances that people can see with demos,
now we can say, “Oh, gosh, it can actually say things in English,
it can understand the contents of an image.” Well, now we
connect these things with all the science fiction we’ve seen
and it’s like, “Oh, I’m afraid!” . . .

The thing I’m more worried about, in a foreseeable future, is
not computers taking over the world. I’m more worried about
misuse of AI. Things like bad military uses, manipulating people
through really smart advertising; also, the social impact,
like many people losing their jobs. Society needs to get
together and come up with a collective response, and not
leave it to the law of the jungle to sort things out.

jimf said...

Sic Transit Glorious Guru.
Arthur Janov, 93, Dies; Psychologist Caught World’s Attention With ‘Primal Scream’
OCT. 2, 2017

Arthur Janov, a California psychotherapist variously called
a messiah and a mountebank for his development of primal scream
therapy — a treatment he maintained could cure ailments from
depression and alcoholism to ulcers, epilepsy and asthma,
not to mention bring about world peace — died on Sunday. . .

[The pianist Roger] Williams. . . publicly counted Dr. Janov
“as one of history’s five greatest men (along with Socrates,
Galileo, Freud and Darwin).”

Dr. Janov appeared to concur. Primal therapy, he told an
interviewer. . . was “the most important discovery of the
20th century.” . . .

He also listed homosexuality among the ailments that primal
therapy could “cure,” and continued to list it long after the
American Psychiatric Association declassified it as a psychiatric
disorder in 1973. . .

Primal therapy was in many ways of a piece with its time. The
quest for happiness amid postwar suburban anomie had already
spawned Dianetics, the metaphysical movement first propounded
in 1950 by L. Ron Hubbard, who four years later rebranded it
as Scientology.

The ’60s counterculture saw the birth of the human potential movement,
with its promises of enlightened personal fulfillment. The ’70s
would see the advent of EST, the set of self-improvement seminars
established in 1971. . .

[B]ook critic Robert Kirsch sounded an admonitory note about
["The Primal Scream"'s] “hyperbole” and “evangelic certainty.” . . .

Psychologists. . . cited. . . the unverifiability of its central claim. . .
and the lack of independent, controlled studies demonstrating
the therapy’s effectiveness.

But the rhapsodic public endorsement of [celebrities]. . .
caused “The Primal Scream” to be heard round the world. . .

A 2006. . . survey of more than 100 “leading mental health
professionals”. . . found primal therapy to be “certainly discredited” —
together with treatments including angel therapy, crystal healing,
past-lives therapy, future-lives therapy and post-alien-abduction therapy. . .

If Dr. Janov’s work was considered marginal by mainstream psychology,
it appeared over time to have been marginalized by the publishing industry
as well. Where his earlier books. . . were issued by major publishers,
his later ones were brought out primarily by small presses, vanity presses
and print-on-demand houses. . .

[In] Dr. Janov’s most recent book. . . “Beyond Belief: Cults, Healers,
Mystics and Gurus — Why We Believe,”. . . he wrote: “Individuals whose
agonies have no rhyme or reason, whose barely contained desperation impels
them to search for magic, badly need bearers of good tidings. Enter the
Dr. Feelgoods, who promise hope against hopelessness, help against
helplessness, whose incantations calm, soothe and relieve. . .

Neurosis and psychosis have us believing that quartz crystals can make
a sick person well; that by humbling yourself and giving yourself over
to a higher power, you can follow 12 steps to salvation; that a
greedy charlatan who wears white robes holds the keys to wisdom; that
the rantings of a self-appointed messiah are God’s truth. . .”


jimf said...

> Calling a device "artificially intelligent" has never once made it so.
> 12:44 PM - Oct 2, 2017
> [T]here has been a giant transfer of time, attention, and resources
> from reality to fantasy. Rather than pursuing the American dream,
> people are simply dreaming. . .

"Four Years Later"
Date: Fri Apr 19 2002

The date is April 19, 2006 and the world is on the verge of something
wonderful. The big news of the last twelve months is the phenomenal success
of Ben Goertzel's Novamente program. It has become a super tool for solving
complex problems. . . "[M]iracle" cures for one major disease after
another are being produced on almost a daily basis. . .
[T]he success of the Novamente system has made
Ben Goertzel rich and famous making frequent appearances on the talk show
circuit as well as visits to the White House. One surprise is the fact that
the System was unable to offer any useful advice to the legal team that
narrowly fended off the recent hostile takeover attempt by IBM. The
Novamente phenomen[on] has triggered an explosion of public interest and
research in AI. Consequently, the non-profit organization The Singularity
Institute for Artificial Intelligence has been buried under an avalanche of
donations. In their posh new building in Atlanta we find Eliezer working
with the seedai system of his own design. . .

Tick tock, tick tock.

"Frequent appearances on the talk show circuit" -- that's funny.
That's what 15 years'll do. 2002 was prior to YouTube.

Also prior to the "alt-right".

jimf said...
‘Artificial Intelligence’ was 2016's fake news
Putting the 'AI' into FAIL
By Andrew Orlowski
2 Jan 2017

. . .

There’s a cultural gulf between AI’s promoters and the
public that Asperger’s alone can’t explain. There’s no polite
way to express this, but AI belongs to California’s inglorious
tradition of generating cults, and incubating cult-like thinking.
Most people can name a few from the hippy or post-hippy years – EST,
or the Family, or the Symbionese Liberation Army – but actually,
Californians have been at it longer than anyone realises.

Today, that spirit lives on in Silicon Valley, where creepy billionaire
nerds like Mark Zuckerberg and Elon Musk can fulfil their desires
to “play God and be amazed by magic” . . .


jimf said...
AI in Medicine? It's back to the future, Dr Watson
Why IBM's cancer project sounds like Expert Systems Mk.2
By Andrew Orlowski
25 Sep 2017

. . .

AI is always "improving" – as much is implied by the cleverly
anthropomorphic phrase, "machine learning". Learning systems don't
get dumber. But what if they don't actually improve?

The caveat accompanies almost any mainstream story on machine learning
or AI today. But it was actually being expressed with great
confidence forty years ago, the last time AI was going to
"revolutionise medicine".

IBM's ambitious Watson Health initiative will unlock "$2 trillion of value,"
according to Deborah DiSanzo, general manager of Watson Health at IBM.

But this year it has attracted headlines of the wrong kind. In February,
the cancer centre at the University of Texas put its Watson project on
hold, after spending over $60m with IBM and consultants
PricewaterhouseCoopers. . .

Given how uncanny it is that so much of today's machine learning mania
echoes earlier hypes, let's take a step back and examine the fate of one
showpiece Artificial Intelligence medical system, and see if there's
anything we can learn from history. . .

The history of AI is one of long "winters" of disinterest punctuated by
brief periods of hype and investment. Developed by Edward Shortliffe,
MYCIN was a backward-chaining system designed to help clinicians that
emerged early on in the first "AI winter". . .

MYCIN used AI to identify the bacteria causing infections, and based
on information provided by a clinician, recommended the correct dosage
for the patient.

MYCIN also bore the hallmarks of experience. The first two decades of
AI had been an ambitious project to encode all human knowledge in
symbols and rules, so they could be algorithmically processed by
a digital computer. Despite great claims made on its behalf, this
had yielded very little of use. Then in 1973, the UK withdrew funding
for AI from all but three UK universities. The climate had gone
cold again.

AI researchers were obliged to explore new approaches. . .
Micro-worlds, artificially simple situations, were one approach. . .
From Micro-worlds came rules-based "expert systems". MYCIN was such
a rules-based system. Comprising 150 IF-THEN statements, MYCIN made
inferences from a limited knowledge base. . .
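
[The backward-chaining idea behind such rules-based systems fits in a few
lines of code. The sketch below is purely illustrative -- the rules and facts
are invented for the example, not drawn from MYCIN's actual knowledge base:]

```python
# Toy backward-chaining inference over IF-THEN rules, in the spirit of a
# rules-based expert system like MYCIN. Rules and facts are invented.

# Each rule pairs a list of antecedent facts with a single consequent fact.
RULES = [
    (["gram_negative", "rod_shaped", "anaerobic"], "bacteroides"),
    (["infection_site_blood", "bacteroides"], "recommend_clindamycin"),
]

def prove(goal, facts, rules):
    """Try to establish `goal` by chaining backward from known facts."""
    if goal in facts:
        return True
    # Find a rule whose consequent matches the goal, then recursively
    # try to prove each of its antecedents in turn.
    for antecedents, consequent in rules:
        if consequent == goal and all(prove(a, facts, rules) for a in antecedents):
            return True
    return False

observed = {"gram_negative", "rod_shaped", "anaerobic", "infection_site_blood"}
print(prove("recommend_clindamycin", observed, RULES))  # True
```

[The system works the chain in reverse: to justify the recommendation it must
first infer the organism, and to infer the organism it checks the clinician's
reported observations -- which is also why, as Dreyfus argued, everything
hinges on the knowledge actually fitting into such rules.]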

Stanford had compared MYCIN to the work of eight experts at Stanford
medical school. Out in the real world it was deemed unfit for purpose,
and went unused.

MYCIN's Achilles' heel was predicted in advance by the leading AI critic
(and tormentor) Hubert Dreyfus. Not all knowledge can be finessed into "rules". . .

AI today is very different to the AI of the '60s, '70s and '80s:
probabilistic AI takes a brute force approach, using large data sets.
In some cases, such as speech recognition, this has been fantastically
successful. In others it's still quite useful. But in many other situations,
it isn't. . .

Today we're in another Tulip Mania phase of AI: Softbank's singularity-obsessed
boss has pledged that much of his $100bn tech fund will focus on
AI and ML investments. . .

jimf said...

Meanwhile, Dan Brown introduces Da Vinci Code readers to the Singularity:

Whoever You Are.
Whatever You Believe.
Everything Is About To Change.

Ride the E-Wave!

Not Watson. Winston!

jimf said...

> Today we're in another Tulip Mania phase of AI: Softbank's singularity-obsessed
> boss has pledged that much of his $100bn tech fund will focus on
> AI and ML investments. . .
Masayoshi Son’s Grand Plan for SoftBank’s $100 Billion Vision Fund
OCT. 10, 2017

. . .

The Japanese billionaire [Masayoshi Son, 60, the head of the Japanese conglomerate SoftBank]
said he believed robots would inexorably change the work force and machines would
become more intelligent than people, an event referred to as the “Singularity.” . . .

SoftBank and its Vision Fund have invested billions of dollars in a seemingly
random sample of more than two dozen companies since the fund was announced. . .

Yet the companies all have something in common: They are involved in collecting
enormous amounts of data, which are crucial to creating the brains for the
machines that, in the future, will do more of our jobs. . .

In a speech last month in New York
Mr. Son declared that in 30 years, there would be as many sentient robots
on Earth as humans and that those robots, which he called metal collar workers,
would fundamentally change the labor market.

“Every industry that mankind ever defined and created, even agriculture, will
be redefined,” Mr. Son said. “Because the tools that we created were inferior
to mankind’s brain in the past. Now, the tools have become smarter than
mankind ourselves.” . . .

Blade Runner 204... uh, 7?

So, will they make babies too? (Full metal collar?)
Inquiring minds want to know!