Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Saturday, April 01, 2017


Acrid Oracle is an anagram of Dale Carrico.


jimf said...

> . . . Acrid Oracle . . .

Now see, that's the kind of thing a contemporary "AI"
**can** do. Permute all the letters, and then consult
a dictionary to see which substrings are real
words.
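The whole trick fits in a few lines; a minimal Python sketch (comparing letter multisets rather than literally permuting, which is the standard shortcut):

```python
from collections import Counter

def is_anagram(a: str, b: str) -> bool:
    """True when the two strings use exactly the same letters,
    ignoring case, spaces, and punctuation."""
    count = lambda s: Counter(ch.lower() for ch in s if ch.isalpha())
    return count(a) == count(b)

print(is_anagram("Dale Carrico", "Acrid Oracle"))  # True
```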


Dale Carrico said...

Talking about AI all these years has rendered me artificially imbecilent at last...

jimf said...

> Talking about AI all these years has rendered me
> artificially imbecilent at last...

Don't fret. Our wits will be refurbished as soon
as we get our own AIs to talk **to**!

> >
> >
> > The Outline: When Machines Go Rogue
> >
> > . . . the jet hit the frozen ground with the velocity
> > of a .45 caliber bullet. . .
> Of course, this real-life autopilot malfunction, as
> tragic as its consequences were, still lacks the main
> maguffin of an "AI thriller"

When is AI appropriate?
July 11, 2016
Cathy O'Neil

I was invited last week to an event co-sponsored by the
White House, Microsoft, and NYU called AI Now: The social
and economic implications of artificial intelligence technologies
in the near term.

Before I talk about some of the ideas that came up, I want to
mention that the definition of “AI” was never discussed. After
a while I took it to mean anything that was technological that
had an embedded flow chart inside it. So, anything vaguely
computerized that made decisions. Even a microwave that automatically
detected whether your food was sufficiently hot – and kept
heating if it wasn’t – would qualify as AI under these rules. . .

A killer microwave. No, I don't think that would cut the
mustard as an AI thriller maguffin either. It might be suitable
for a supernatural thriller -- like that demon-possessed
floor lamp in Amityville 4 - The Evil Escapes (with Patty Duke
and Jane Wyatt, no less) ;->

(Hey, was that a microwave that got Jane Wyatt's parrot?
No, I guess it was a toaster oven.)

jimf said...

> . . . artificially imbecilent . . .
btw, the quality MIRI sneer culture fodder is now at

in which we see rationalists™ expound upon the AI safety implications
of how those vile transgenders will PAPERCLIP US ALL!!!!
(and oh god the discussion)

and the rationalists were doing so well with transgender issues up
to now. turns out they’re fake goths

"the rationalists were doing so well with transgender issues up
to now"? I guess that means Michael Anissimov never counted
as a rationalist™.

There was a Twitter war a few years ago, tagged "#Trannygate",
between our old pal Michael and NRx fellow-traveller
Bryce Laliberte over the latter's daring to consort with
transgender Google programmer Justine Tunney
(and cf. stuff I quoted in the comment thread of ).

But what could the T in LGBT possibly have to do with artificial intelligence?


(via )
Why "gender identity" and trans activism could literally destroy the world

. . .

[H]umans are a mess of conflicting desires inherited from our evolutionary
and sociocultural history; we don't have a utility function written down
anywhere that we can just put in the AI. So if the systems that ultimately
run the world end up with a utility function that's not in the incredibly
specific class of those we would have wanted if we knew how to translate
everything humans want or would-want into a utility function, then the
machines disassemble us for spare atoms and tile the universe with
something else. . .

the bad epistemic hygiene habits of the trans community that are
required to maintain the socially-acceptable alibi that transitioning is
about expressing some innate "gender identity", are necessarily spread
to the computer science community, as an intransigent minority of trans
activist-types successfully enforce social norms mandating that everyone
must pretend not to notice that trans women are eccentric men. With
social reality placing such tight constraints on perception of actual
reality, our chances of developing the advanced epistemology needed to
rise to the occasion of solving the alignment problem seem slim at best. . .

Uh **huh**.

jimf said...

Boku de Roko (David Gerard)
the other roko’s basilisk

> there’s a novella called roko’s basilisk which someone wrote
> and put up on kindle. . .

just finished it. . . it’s a quick psychological horror short.
basically it takes the concepts behind roko’s basilisk and puts
them into story form. “roko” plays both yudkowsky and roko and
explains the killing meme to his not-as-brilliant friend.
in this world “friendly ai” is a term used in real ai research
(rather than something that gets real ai researchers punching walls
harder than chemists do at “nanobots”). “roko” has solved
Coherent Extrapolated Volition or something close enough for
a scifi handwave. . .

Ehh. . . I'm reassimilating _Neuromancer_ in audiobook form.
And I think I'll listen to the BBC radio play after that.

I used to be able to buy single wrapped pieces of Ting Ting Jahe
candied ginger at a deli down the street from where I worked.
Nowadays I can order a bag of it on Amazon if I want.
Trying to keep the sugar consumption under control, though. ;->

jimf said...

Loc. cit.

> I swear to god, if I hear another pasty wight boi wring their
>hands together about The Coming SuperIntelligence™…
> As if we already don’t have perfectly stupid sub-intelligent algorithms
> ruining lives, causing destruction. But those algorithms are owned
> by wight people, so that’s apparently okay.
> It’s like wight people — or, really, wight bois — are secretly terrified
> that their malevolent rule will be supplanted by beings that are just
> as cruel as them. . .

> ---------------
> Joe Rogan and Lawrence Krauss on artificial intelligence
> Krauss: AI researchers [say] -- and I find
> this statement almost vacuous, but I'm amazed that they use it all
> the time -- . . . program machines with "human values". . .
> [A] very smart guy. . . said to me, "well, they just have to watch us." And I
> said, "What do you mean -- they watch Donald Trump and they know what
> human values are?" I mean -- come on!

Or our AI pupils could watch these guys:
Jerks and the Start-Ups They Ruin
APRIL 1, 2017

. . .

[T]he real problem with tech bros is not just that they’re
boorish jerks. It’s that they’re boorish jerks who don’t know
how to run companies.

Look at Uber, the ride-hailing start-up. . . The company’s woes
spring entirely from its toxic bro culture, created by its
chief executive, Travis Kalanick.

What is bro culture? Basically, a world that favors young men
at the expense of everyone else. A “bro co.” has a “bro” C.E.O.,
or C.E.-Bro, usually a young man who has little work experience
but is good-looking, cocky and slightly amoral — a hustler. . .

Bro cos. become corporate frat houses, where employees are chosen
like pledges, based on “culture fit.” Women get hired, but they
rarely get promoted and sometimes complain of being harassed.
Minorities and older workers are excluded.

Bro culture also values speedy growth over sustainable profits,
and encourages cutting corners, ignoring regulations and doing
whatever it takes to win.

Sometimes it works. But often the whole thing just flames out. . .

Imagine the future Bro-bot God. Gets the whole human race drunk,
and then sends drone cameras scurrying about taking pictures up women's
skirts.

jimf said...

> Imagine the future Bro-bot God.

Or, alternatively, we could get an AI Overlord acculturated
as that bane of all libertechbrotarians, the Social Justice Warrior.

In fact, Google is working on that one as we speak:
Google Training Ad Placement Computers to Be Offended
APRIL 3, 2017

MOUNTAIN VIEW, Calif. — Over the years, Google trained computer systems
to keep copyrighted content and pornography off its YouTube service.
But after seeing ads from Coca-Cola, Procter & Gamble and Wal-Mart
appear next to racist, anti-Semitic or terrorist videos, its engineers
realized their computer models had a blind spot: They did not understand. . .

Now teaching computers to understand what humans can readily grasp
may be the key to calming fears among big-spending advertisers that
their ads have been appearing alongside videos from extremist groups
and other offensive messages.

Google engineers, product managers and policy wonks are trying to
train computers to grasp the nuances of what makes certain videos
objectionable. . .

_South Park_ gave us Mecha-Streisand. Here's a nightmare meme for the
libertechbrotarians exponentially worse than Roko's Basilisk:




jimf said...

> ----------------
> Why "gender identity" and trans activism could literally destroy the world
> . . .
> [H]umans are a mess of conflicting desires inherited from our evolutionary
> and sociocultural history; we don't have a utility function written down
> anywhere that we can just put in the AI.
The Changeling
Original Airdate: 29 Sep, 1967

KIRK: . . . Lieutenant. Lieutenant, are you all right?

(Uhura just gazes blankly ahead.)

KIRK: Sickbay. What did you do to her?

NOMAD: That unit is defective. Its thinking is chaotic. Absorbing it unsettled me.

SPOCK: That unit is a woman.

NOMAD: A mass of conflicting impulses.


jimf said...

From your Twitter feed:
DNA isn't mere code -- it's dynamic. Scientists describe it with words
like "orchestration," "choreography," "dance"

Computer programmers unbellyfeel the molecular dance that is life.

And **nervous systems** -- all of 'em, not just the
Human Brain (insert b'rakah, genuflect) -- pile levels of
**inter**cellular dynamism on top of the **intra**cellular
DNA'n'metabolism disco.

I'm reminded of some discussions I weighed in on 16 ( :-0 )
years ago on the old Extropians' mailing list. (It's
2017 -- do you know where your Singularity is?!)
Re: Keeping AI at bay (was: How to help create a singularity)
May 06 2001 wrote:

> [C]urrent early precursors of reconfigurable hardware (FPGAs)
> seem to generate extremely compact, nonobvious solutions even
> using current primitive evolutionary algorithms.

But at some point the evolution stops
(when the FPGA is deemed to have solved the problem), the chip is plugged
into the system and switched on, and becomes just another piece of
static hardware. Same with neural networks -- there's a training set
corresponding to the problem domain, the network is trained on it,
and then it's plugged into the OCR program (or whatever), shrink-wrapped,
and sold.

Still too static, folks, to be a basis for AI. When are we going to have
hardware with the sort of continual plasticity and dynamism that nerve tissue has?
(I know it's going to be hard. And, in the meantime, evolved FPGAs
might have their uses, if people can trust them to be reliable). . .
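That evolve-then-freeze workflow is easy to caricature in a few lines; here's a toy analogue (a bitstring standing in for an FPGA configuration, and a 1+1 hill climber standing in for the evolutionary algorithm):

```python
import random

random.seed(42)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(bits):
    return sum(b == t for b, t in zip(bits, TARGET))

# "Evolution" phase: flip one bit at a time, keep any non-worsening mutant.
genome = [random.randint(0, 1) for _ in TARGET]
while fitness(genome) < len(TARGET):
    child = genome[:]
    child[random.randrange(len(child))] ^= 1
    if fitness(child) >= fitness(genome):
        genome = child

# "Deployment" phase: the evolved configuration is frozen and only ever read.
FROZEN = tuple(genome)
print(FROZEN == tuple(TARGET))  # True
```

Once `FROZEN` exists, nothing in the program ever mutates it again -- which is exactly the "just another piece of static hardware" complaint.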


[ ]

James Rogers wrote:

> Give me just one example of something you can do in high-plasticity
> evolvable hardware that can't be done in software.

Give **me** an example of just one out of the trillions of instances
of high-plasticity evolvable hardware running around on this
planet that's been successfully replicated in software!
Re: Contextualizing seed-AI proposals
Apr 14 2001

> Intelligence ("problem-solving", "stream of consciousness")
> is built from thoughts. Thoughts are built from structures
> of concepts ("categories", "symbols"). Concepts are built from
> sensory modalities. Sensory modalities are built from the
> actual code.

Too static, I fear. Also, too dangerously perched on
the edge of what you have already dismissed as the "suggestively-
named Lisp token" fallacy.

Fee, fie, foe, fum.
Cogito, ergo sum. . .

> [W]hen the FPGA is deemed to have solved the problem, the chip is plugged
> into the system and switched on, and becomes just another piece of
> static hardware. . .

Yeah, this is like what happens to Deep Learning (TM) neural networks,
after they're trained:
Google Chases General Intelligence With New AI That Has a Memory
Shelly Fan
Mar 29, 2017

[A]rtificial neural networks like Google’s DeepMind learn to master
a singular task and call it quits. To learn a new task, it has to reset,
wiping out previous memories and starting again from scratch.

This phenomenon, quite aptly dubbed “catastrophic forgetting,”
condemns our AIs to be one-trick ponies. . .
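The effect is easy to reproduce at toy scale. A minimal numpy sketch (a single-weight linear model, nothing like DeepMind's actual networks) shows task A being erased by subsequent training on task B:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(100, 1))

def train(w, y, steps=200, lr=0.1):
    """Plain gradient descent on squared error for a one-weight model."""
    for _ in range(steps):
        grad = 2 * np.mean((x * w - y) * x)
        w -= lr * grad
    return w

def loss(w, y):
    return float(np.mean((x * w - y) ** 2))

task_a = 2.0 * x    # task A: learn y = 2x
task_b = -2.0 * x   # task B: learn y = -2x

w = train(0.0, task_a)
loss_a_before = loss(w, task_a)   # near zero: task A is learned

w = train(w, task_b)              # sequential training on task B...
loss_a_after = loss(w, task_a)    # ...wipes out task A entirely

print(loss_a_before, loss_a_after)
```

With one shared weight and no mechanism to protect it, "catastrophic" is just what gradient descent does.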


Shelly Xuelai Fan is a neuroscientist at the University of California,
San Francisco, where she studies ways to make old brains young again.
In addition to research, she's also an avid science writer with an
insatiable obsession with biotech, AI and all things neuro. . .

I wonder how old Ms. Fan was in 2001.

jimf said...

> I'm reminded of some discussions I weighed in on 16 ( :-0 )
> years ago on the old Extropians' mailing list. (It's
> 2017 -- do you know where your Singularity is?!) . . .
> I wonder how old Ms. Fan was in 2001.

Oldthinkers unbellyfeel. . .
Old Mice Made Young Again With New Anti-Aging Drug
by Shelly Fan
Apr 05, 2017

. . .

[A] collaborative effort between the Erasmus University in the
Netherlands and the Buck Institute for Research on Aging in California
may have a solution. Published in the prestigious journal Cell,
the team developed a chemical torpedo that, after injecting into mice,
zooms to senescent cells and puts them out of their misery, while
leaving healthy cells alone. . .

I guess this isn't the same thing as got the Young Turks excited
a few days ago:
Harvard Scientists REVERSE Aging In Mice. People Next...
The Young Turks
Mar 26, 2017

Dr. David Sinclair, from Harvard Medical School, and his colleagues
reveal their new findings in the latest issue of Science. They focused
on an intriguing compound with anti-aging properties called
NAD+, short for nicotinamide adenine dinucleotide. . .

No mention by the Turks of the hoopla a decade ago about resveratrol
and SIRT1 activators.

Me, I'm betting on the Peter Thiel (and Eldritch Palmer)
page-out-of-Count Dracula approach ;-> .
( )

Hey, does Ray Kurzweil get blood changes these days, or is
he still just gobbling supplements (including NAD+ ?) and getting his biomarkers
measured by Dr. Terry Grossman? Inquiring minds. . . Well, come to
think, I'm not sure I **do** want to know. :-0

jimf said...

> on the old Extropians' mailing list. . .
> ---------
> Old Mice Made Young Again With New Anti-Aging Drug

Geez, remember Doug Skrecky and his fruit flies?

Apparently somebody does:
Stem Cell life extension formulas. Doug Skrecky
fruit fly, longevity, anti aging, life extension
Scott Rauvers
Apr 17, 2016

jimf said...

To paraphrase a Great Man: "Nobody knew the world
could be so complicated."

To Curb Global Warming, Science Fiction May Become Fact
Eduardo Porter
APRIL 4, 2017
Remember “Snowpiercer”? . . .

[A]n attempt to engineer the climate and stop global warming
goes horribly wrong. The planet freezes. Only the passengers
on a train endlessly circumnavigating the globe survive.
Those in first class eat sushi and quaff wine [like Tilda Swinton].
People in steerage eat cockroach protein bars.

Scientists must start looking into this. Seriously. . .

Let’s get real. The odds that these processes could be slowed,
let alone stopped, by deploying more solar panels and wind turbines
seemed unrealistic even before President Trump’s election.
It is even less likely now that Mr. Trump has gone to work
undermining President Barack Obama’s strategy to reduce
greenhouse gas emissions.

That is where engineering the climate comes in. . .

[T]he research agenda must include an open, international debate
about the governance structures necessary to deploy a technology that,
at a stroke, would affect every society and natural system in the
world. In other words, geoengineering needs to be addressed not
as science fiction, but as a potential part of the future just a
few decades down the road.

“Today it is still a taboo, but it is a taboo that is crumbling,” . . .

Arguments against geoengineering are in some ways akin to those
made against genetically modified organisms and so-called Frankenfood. . .

[H]ow could the world agree on the deployment of a technology
that will have different impacts on different countries? How could
the world balance the global benefit of a cooling atmosphere
against a huge disruption of the monsoon on the Indian subcontinent?
Who would make the call? Would the United States agree to this
kind of thing if it brought drought to the Midwest? Would Russia
let it happen if it froze over its northern ports?

Geoengineering would be cheap enough that even a middle-income
country could deploy it unilaterally. . .

“The biggest challenge posed by geoengineering is unlikely to be
technical, but rather involve the way we govern the use of this
unprecedented technology.” . . .

People should keep in mind the warning by Alan Robock, a
Rutgers University climatologist, who argued that the worst case
from the deployment of geoengineering technologies might
be nuclear war. . .

Geeee oh, oh geee oh.

Old worms of yesterday. . . unbellyfeel. . . THE WORMHOLE!!!

All I want is to be in his movie. . .


jimf said...
let none say phyg [that's the rot13 encoding of "cult"]

03 April 2017
[ ]

> A guy I know, who works in one of the top M[achine]L[earning] groups,
> is literally less worried about superintelligence than he is about
> getting murdered by rationalists. That’s an extreme POV. Most researchers
> in ML simply think that people who worry about superintelligence are
> uneducated cranks addled by sci fi.
> I hope everyone is aware of that perception problem.

05 April 2017
[ ]

> Are you describing me? It fits to a T except my dayjob isn’t ML.
> I post using this shared anonymous account here because in the past
> when I used my real name I received death threats online from
> L[ess]W[rong] users. In a meetup I had someone tell me to my face
> that if my AGI project crossed a certain level of capability,
> they would personally hunt me down and kill me. They were quite serious.
> I was once open-minded enough to consider AI x-risk seriously.
> I was unconvinced, but ready to be convinced. But you know what?
> Any ideology that leads to making death threats against peaceful,
> non-violent open source programmers is not something I want to let
> past my mental hygiene filters.
> If you, the person reading this, seriously care about AI x-risk,
> then please do think deeply about what causes this, and ask yourself
> what can be done to put a stop to this behavior. Even if you haven’t
> done so yourself, it is something about the rationalist community which
> causes this behavior to be expressed.
> . . .
> I would be remiss without laying out my own hypothesis. I believe
> much of this comes directly from ruthless utilitarianism and the
> “shut up and multiply” mentality. It’s very easy to justify murder
> of one individual, or the threat of it even if you are not sure you’d
> carry it through, if it is offset by some imagined saving of the world.
> The problem here is that nobody is omniscient, and AI x-riskers are
> willing to be swayed by utility calculations that in reality have
> so much uncertainty that they should never be taken seriously. . .
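(The rot13 gloss on that thread title -- "phyg" for "cult" -- is easy to verify, since rot13 is built into Python's codec machinery:)

```python
import codecs

# rot13 shifts each letter 13 places, so applying it twice is the identity;
# "cult" and "phyg" encode to each other.
print(codecs.encode("cult", "rot13"))  # phyg
print(codecs.encode("phyg", "rot13"))  # cult
```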

jimf said...

> ---------------
> let none say phyg [that's the rot13 encoding of "cult"]

Back in 2004, one Michael Wilson had materialized as an insider
in SIAI. . . circles. . . At one point, he made a post
[on the S(hock)L(evel)4 mailing list (an Eliezer Yudkowsky-owned forum)]
in which he castigated himself. . . for having "almost destroyed
the world last Christmas" as a result of his own attempts to "code an AI",
but now that he had seen the light (as a result of SIAI's propaganda) he
would certainly be more cautious in the future. (Of course, no
one on the list seemed to find his remarks particularly
outrageous. . .) . . . I sincerely hope that we can solve these problems
[of AI "Friendliness"], stop Ben Goertzel and his army of evil clones
(I mean emergence-advocating AI researchers :) and engineer the apotheosis. . .

( )

The smiley in the above did not reassure me.
In the **absolute worst case** scenario I can imagine,
a genuine lunatic F[riendly]AI-ite will take up the Unabomber's
tactics, sending packages like the one David Gelernter
got in the mail.
[Ben Goertzel wrote on LessWrong]: After I wrote that blog post
["The Singularity Institute's Scary Idea" ],
Michael Anissimov -- a long-time SIAI staffer and zealot whom I
like and respect greatly -- told me he was going to write up and
show me a systematic, rigorous argument as to why “an AGI not built
based on a rigorous theory of Friendliness is almost certain to
kill all humans” (the proposition I called “SIAI’s Scary Idea”).
But he hasn’t followed through on that yet -- and neither has
Eliezer or anyone associated with SIAI. . .

jimf said...

> It’s very easy to justify murder of one individual, or the threat
> of it even if you are not sure you’d carry it through, if it is
> offset by some imagined saving of the world.

I wrote to one of these folks, back in 2003
(via ):

> . . .I think it's important for you to understand its implications
> (though I have little hope that you will).
> If the Singularity is the fulcrum determining humanity's
> future, and **you** are the fulcrum of the Singularity,
> the point at which dy/dx -> infinity, the very inflection
> point itself, then **ALL** morality goes out the window.
> You might as well be dividing by zero.
> You could justify **anything** on that basis. . .
> The more hysterical things seem, the more desperate,
> the more apocalyptic, the more the discourse **and**
> moral valences get distorted (a singularity indeed!)
> by the weight of importance bearing down on one human
> pair of shoulders. Which happens to belong to you (what
> a coincidence).
> Don't go there. . . Back slowly away from the precipice.
> Before it's too late.

To which my interlocutor replied:

> > You could justify **anything** on that basis
> No, *you* could justify anything on that basis. I am much more careful
> with my justifications. . .
> Ethics doesn't change as the stakes go to infinity.

So people have gotten death threats. No surprise there, I guess.

At least, as far as I know, nobody has yet **died** as a result
of this nonsense (by their own or somebody else's hand). Which is
more, I guess, than can be said for Scientology (or Mormonism).

jimf said...

I notice that one of the commenters in the thread at
is one "Dagon".

I wonder if this is the same "Dagon" who was an occasional commenter
here back in '09 (and who got a special mention in ).

Likely enough, I suppose -- the "Dagon" in the OpenAI thread
on LW has been posting there for at least a decade (posts from back
in '07 and the recent comment link to the same LW user overview).
[Dagon wrote, in an excerpt from a comment on Giulio Prisco's
blog] It is frustrating to know that, whereas I feel as secure
in my h+ist convictions as I can possibly be, it will take
decades to have him eat his shoe. It would be very amusing to
have a singularity in 2012, if only to read the comments Dale
makes about it. . .

"Four Years Later"
Date: Fri Apr 19 2002

The date is April 19, 2006 and the world is on the verge of something
wonderful. The big news of the last twelve months is the phenomenal success
of Ben Goertzel's Novamente program. It has become a super tool for solving
complex problems. . . "[M]iracle" cures for one major disease after
another are being produced on almost a daily basis. . .
[T]he success of the Novamente system has made
Ben Goertzel rich and famous making frequent appearances on the talk show
circuit as well as visits to the White House. One surprise is the fact that
the System was unable to offer any useful advice to the legal team that
narrowly fended off the recent hostile takeover attempt by IBM. The
Novamente phenomen[on] has triggered an explosion of public interest and
research in AI. Consequently, the non-profit organization The Singularity
Institute for Artificial Intelligence has been buried under an avalanche of
donations. In their posh new building in Atlanta we find Eliezer working
with the seedai system of his own design. . .

Any day now. Start tenderizing those shoes. :-/

jimf said...

> > . . .Dale Carrico . . . Acrid Oracle . . .
> Now see, that's the kind of thing a contemporary "AI"
> **can** do. Permute all the letters, and then consult
> a dictionary to see which substrings are real
> words.

As described by Jonathan Swift, almost 300 years ago:
by H. Bruce Franklin
[This essay originally appeared in Encyclopedia of Computer Science
(Nature Publishing Group, 2000)]

. . .

To formulate a coherent history of computers in fiction,
the best place to begin may be Jonathan Swift's Gulliver's Travels,
published in 1726. Swift presents an inventor who has constructed
a gigantic machine designed to allow "the most ignorant Person"
to "write Books in Philosophy, Poetry, Politicks, Law, Mathematicks and Theology."
This "Engine" contains myriad "Bits" crammed with all the words of a language,
"all linked together by slender Wires" that can be turned by cranks,
thus generating all possible linguistic combinations. Squads of
scribes produce hard copy by recording any sequence of words that
seems to make sense. . .
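Swift's Engine reduces to a few lines today; a toy version in Python (a six-word lexicon standing in for "all the words of a language"):

```python
from itertools import product

lexicon = ["the", "engine", "writes", "books", "of", "philosophy"]

# Turn the cranks: enumerate every three-word sequence the Engine can emit.
all_lines = [" ".join(words) for words in product(lexicon, repeat=3)]

print(len(all_lines))                   # 6**3 = 216 candidate "sentences"
print("writes books of" in all_lines)   # the scribes' job: spot the near-language
```

The combinatorial explosion is the whole joke: even six words yield 216 three-word lines, and the scribes still have to do all the actual reading.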

Whatever the source of the human obsession with artificial life and
artificial mind -- whether created by means of clockwork automata, stitching
together parts of corpses and zapping them to life with lightning, or
reciting magic spells to animate clay or marble effigies (Golems or Galateas) --
it really is rather amazing to consider just how old the dream (or the nightmare) is.
Thousands of years old. All bound up with the endlessly fascinating
(and terrifying) border between life and death, the fear of death
(and especially of things that were once alive but are now dead,
or things that look like they might be alive but are really dead),
and ghosts and vampires and all the other furniture of horror literature
and bad dreams.

All well antedating the digital computer. The latest technology just
seems (if you don't think too hard about it) to put the old
fantasies on a new-fangled, "scientific" footing. And to give
overly susceptible folks a new reason to scare themselves into
insomnia. ;->