Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Monday, July 13, 2015

Remember the Extropians?

11 comments:

jimf said...

The unending kaffeeklatsch of the usual suspects:

http://lists.extropy.org/pipermail/extropy-chat/

jimf said...

http://lists.extropy.org/pipermail/extropy-chat/2015-July/084811.html
-------------------
[ExI] Future of Humanity Institute at Oxford University £1 million grant for AI

Elon Musk funds Oxford research into machine intelligence
Matt Pickles 1 Jul 2015

The Future of Humanity Institute at Oxford University and the Centre
for the Study of Existential Risk at Cambridge University are to
receive a £1m grant for policy and technical research into the
development of machine intelligence.

This grant will allow Oxford University's Future of Humanity
Institute, part of the Oxford Martin School and Faculty of Philosophy
at the University, to become the world’s largest research institute
working on technical and policy responses to the long-term prospect of
smarter-than-human artificial intelligence.

This growth follows the Institute Director Professor Nick Bostrom's
bestselling book “Superintelligence”, which was endorsed by both Elon
Musk and Bill Gates.

Professor Bostrom said: 'There has been much talk recently about the future
of AI. Elon - characteristically - decided to actually do something
about it.'
====


"Technical" research, eh? That'll be fun.

But who is Matt Pickles? Oh:

Media Relations Manager at University of Oxford
https://uk.linkedin.com/pub/matt-pickles/9/53b/a92

Cute!

https://www.youtube.com/watch?v=6U9DKUWbmRc

jimf said...

You know, I'm having trouble keeping my Future of The Future Institutes straight
these days.

A month ago, the New York Times mentioned:

http://www.nytimes.com/2015/05/21/style/ava-of-ex-machina-is-just-sci-fi-for-now.html
--------------------
. . .

Elon Musk, founder of Tesla, recently donated $10 million to the Future
of Life Institute, an organization that seeks to “mitigate existential
risks facing humanity” from “human-level artificial intelligence.”
====
(via
http://amormundi.blogspot.com/2015/05/but-can-killer-robot-love.html )


Then, on the Extropians' chat list a couple of weeks ago,
we had:


http://lists.extropy.org/pipermail/extropy-chat/2015-July/084811.html
-------------------
. . .

The Future of Humanity Institute at Oxford University and the Centre
for the Study of Existential Risk at Cambridge University are to
receive a £1m [that's USD 1.56 million, at the current exchange rate]
grant for policy and technical research into the
development of machine intelligence.
====


The Future of Humanity Institute I'd heard of before:
http://www.fhi.ox.ac.uk/
That's Nick Bostrom's thing (2005):
https://en.wikipedia.org/wiki/Future_of_Humanity_Institute


The Future of Life Institute is a different one:
http://futureoflife.org/
Ah yes, this is Max Tegmark's thing (March, 2014; with $$
from Jaan Tallinn)
https://en.wikipedia.org/wiki/Future_of_Life_Institute


I guess Max Tegmark counts for (6.41025641025641 times)
more bucks than Nick Bostrom,
in Musk's view. More "technical", dontcha know.
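(For the curious, that ratio just divides Musk's two donations at the $1.56/£ rate quoted in the earlier excerpt; a quick sanity check:)

```python
# Sanity check on the 6.41... ratio: Musk's $10M gift to Tegmark's
# Future of Life Institute versus the £1M grant shared by Bostrom's
# FHI and Cambridge's CSER, converted at the rate cited above.
fli_grant_usd = 10_000_000   # Future of Life Institute donation, USD
fhi_grant_gbp = 1_000_000    # FHI/CSER grant, GBP
usd_per_gbp = 1.56           # exchange rate quoted in the post

ratio = fli_grant_usd / (fhi_grant_gbp * usd_per_gbp)
print(ratio)  # roughly 6.41025641025641
```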


Tegmark got his toothy smile in the New York Times the other
day. But the interesting remarks in the article came from
a different Institute:

http://bits.blogs.nytimes.com/2015/07/11/the-more-real-threat-posed-by-powerful-computers/
----------------
Artificial Intelligence
The Real Threat Posed by Powerful Computers
by Quentin Hardy
July 11, 2015

In October, Elon Musk called artificial intelligence “our greatest
existential threat,” and equated making machines that think with
“summoning the demon.” In December, Stephen Hawking said
“full artificial intelligence could spell the end of the
human race.” And this year, Bill Gates said he was “concerned
about super intelligence,” which he appeared to think was
just a few decades away.

But if the human race is at peril from killer robots, the problem
is probably not artificial intelligence. It is more likely
to be artificial stupidity. . .

There is little sense among practitioners in the field of
artificial intelligence that machines are anywhere close to
acquiring the kind of consciousness where they could form
lethal opinions about their makers.

“These doomsday scenarios confuse the science with remote
philosophical problems about the mind and consciousness,”
Oren Etzioni, chief executive of the Allen Institute for
Artificial Intelligence, a nonprofit that explores artificial
intelligence, said. “If more people learned how to write
software, they’d see how literal-minded these overgrown
pencils we call computers actually are.”

What accounts for the confusion? One big reason is the way computer
scientists work. “The term ‘A.I.’ came about in the 1950s,
when people thought machines that think were around the corner,”
Mr. Etzioni said. “Now we’re stuck with it.”

It is still a hallmark of the business. Google’s advanced A.I.
work is at a company it acquired called DeepMind. A pioneering
company in the field was called Thinking Machines. Researchers
are pursuing something called Deep Learning, another suggestion
that we are birthing intelligence.
====


Overgrown pencils. Tee hee.

("They [computers] were electric trains you could run in
circles!" -- Ted Nelson).

Hey, wasn't there an early word-processing program called
Electric Pencil?
https://en.wikipedia.org/wiki/Electric_Pencil
And an even earlier one at MIT called Expensive Typewriter
https://en.wikipedia.org/wiki/Expensive_Typewriter

jimf said...

http://ieet.org/index.php/IEET/more/bruere20150715
------------
Transhumanism – The Final Religion?
by Dirk Bruere
Jul 15, 2015

. . .

[U]nlike most other religions they will not be knocking on your door
trying to convert you, nor will they be asking you for money.
====

Well, that's a relief!

;->

Dale Carrico said...

Well, that's a relief!

But of course it's not entirely true either. Techno-transcendental framings of technoscientific change and issues suffuse the popular "tech" press and commercial imagery more generally; robot cultists lurking in more mainstream sfnal fandoms and pop-tech fora make their pitches and post their links to techno-transcendental orgs and canon and fluff the expertise of fellow-faithful; membership organizations and think tanks embedded within techno-transcendental sub(cult)ures do solicit funds and attract grants; at this point singularitarian AI and eugenic transhumanist faithful are indulging in epic logrolling of the Usual Robocultic Suspects via dollar-fat corporate marketing arms masquerading as academic and policy-making institutes.

jimf said...

https://pando.com/2015/07/16/jeffrey-tucker/
---------------
Imagine if the "Uber is a good start" guy turned out to be a crazy racist homophobe
Or don't, because he is
by Paul Carr
July 16, 2015

[T]he most terrifying moment of my visit to the FreedomFest libertarian conference. . .
came during a panel about “hacking the state” where a publisher named Jeffrey Tucker
described his vision for a world where technology has disrupted away all regulations
and laws. . .

Tucker came across as a fully fledged sociopath; someone who would see the world burn
and call it progress. I suggested that Tucker represents a new breed of modern tech-savvy
libertarian, the old racist guard of Libertarianism having withered away.

It turns out I was wrong. Not about Tucker being a fucking nut -- in fact, as you’ll
see, he’s far more crazy than I could possibly have imagined -- but rather about him
being a new breed...
====


(the rest of the article costs $$$)

However,


http://www.theguardian.com/technology/2013/nov/25/paul-carr-news-site-nsfw-corp-pando-daily
---------------
Paul Carr's news site NSFW Corp joins with Silicon Valley-backed PandoDaily

After NSFW's financial failure, the tech journalist joins the tech site
PandoDaily – 'the site of record for Silicon Valley'
25 November 2013

Tech journalist and entrepreneur Paul Carr’s last venture was surprisingly old-school.
NSFW Corp, a news site that billed itself as 'the Economist written by the Daily Show',
put out a print magazine – and it even put up a paywall. Despite winning fans,
it didn’t make money. Now Carr and co are off to join tech blog PandoDaily, a move
likely to be met with applause and snickers in the incestuous world of tech hackery. . .

PandoDaily, a two-year-old tech blog that bills itself as "the site of record for
Silicon Valley," has so far dealt in lighter fare and has been accused by some of
pandering to the tech community and the interests of its billionaire backers.
Rival blog Valleywag recently called founder Sarah Lacy “tech’s most loyal sycophant.” . .

Pando has. . . managed to raise money – $3m to date from some of the Valley’s
biggest names, including Marc Andreessen, Netscape founder, and Peter Thiel,
PayPal co-founder and Facebook backer. . .

Carr said he did not see Pando as pandering and said the site had been critical
of the Valley and its backers on numerous occasions. . . ". . .I challenge anyone
to say Pando is too friendly to Silicon Valley,” said Carr. . . “If you want to
see Silicon Valley friendly, go to TechCrunch and see press release after press
release after press release written up by children,” he said. . .

Carr said he expects Pando to start making more waves. . . Pando’s investigative
team would target all the most powerful people in the Valley and challenge them
“when they need challenging,” he said. Some of Pando’s investors
“were going to shit themselves” when they heard NSFW’s team was joining Pando,
he added. . .
====

No shit! ;->

jimf said...

> https://pando.com/2015/07/16/jeffrey-tucker/

It does seem conceivably more-than-coincidental that an
author at a (pay-per-view) Web site whose ostensible mission
is to investigate and criticize the rich and powerful of
Silicon Valley (but which is backed by some of those same rich
and powerful) has written a piece savaging a guy who just last
year apparently riled up his fellow libertarians by publishing an
article critical of libertarian "brutalism".

http://libertarianstandard.com/2014/03/18/what-explains-the-brutalism-uproar/

Oh dear. What Would Ayn Rand Do?


> http://bits.blogs.nytimes.com/2015/07/11/the-more-real-threat-posed-by-powerful-computers/
> ----------------
>
> . . .
>
> “These doomsday scenarios confuse the science with remote
> philosophical problems about the mind and consciousness,”
> Oren Etzioni, chief executive of the Allen Institute for
> Artificial Intelligence, a nonprofit that explores artificial
> intelligence. . .
> ====

Could that be "Allen" as in Paul Allen? Sure enough!

https://en.wikipedia.org/wiki/Allen_Institute_for_Artificial_Intelligence
----------------
Oren Etzioni was appointed by Paul Allen in September 2013 to direct the research
at the institute. . .
====

And here I thought Paul Allen was content to collect antique computers
and Star Trek props.

Next up, the William H. Gates III Institute for Singularity Research?

Or maybe the Steven Anthony Ballmer Institute for Terminator Studies?
(Nah, he'll stick to buying sports teams. ;-> )

jimf said...

> And here I thought Paul Allen was content to collect antique computers
> and Star Trek props.

Oh right, I forgot about Vulcan Ventures.
https://en.wikipedia.org/wiki/Vulcan_Inc.

It's the umbrella organization for all the Allen investments.
Including the antique computers and Star Trek props.
Also sports teams. ;->

And The Future!

http://chronopause.com/index.php/2011/04/19/cryonics-nanotechnology-and-transhumanism-utopia-then-and-now/
--------------
Paul Allen has put $40 million into Tri-Alpha (B11-H fusion start-up).
====


https://en.wikipedia.org/wiki/Tri_Alpha_Energy,_Inc.
---------------
Tri Alpha Energy, Inc. (TAE) is an American company based in Foothill Ranch,
California created for the development of aneutronic fusion power. . .

Tri Alpha Energy is a very secretive company: they have no web site, do not
answer the phone, and operate in a stealth way, not publicly announcing
any improvements nor any schedule for commercial production. However,
they have registered various patents, frequently renewed over the years.
They also regularly publish theoretical and experimental results in
academic journals.

As of 2014, Tri Alpha Energy is said to have hired more than 150 employees
and raised over $140 million, far more than any other private fusion power research
company. Main financing has come from Goldman Sachs and venture capitalists
such as Microsoft co-founder Paul Allen's Vulcan Inc., Rockefeller's Venrock,
Richard Kramlich's New Enterprise Associates, and from various people like
former NASA software engineer Dale Prouty who succeeded George P. Sealy
after his death as the CEO of Tri Alpha Energy. Hollywood actor Harry Hamlin,
astronaut Buzz Aldrin, and Nobel Prize winner Arno Allan Penzias figure among
the board members. It is also worth noting that the Government of Russia,
through the joint-stock company Rusnano, also invested in Tri Alpha Energy in
February 2013, and that Anatoly Chubais, CEO of Rusnano, became a member of
the Tri Alpha board of directors.
====

Is there a joke that starts out "Harry Hamlin, Buzz Aldrin, and Arno Penzias
walked into a bar. . ." ?

Do I see Lady 3Jane Tessier-Ashpool lurking in that private box up there?

jimf said...

> https://en.wikipedia.org/wiki/Allen_Institute_for_Artificial_Intelligence
> ----------------
> Oren Etzioni was appointed by Paul Allen in September 2013 to direct the research
> at the institute. . .

When I heard the name "Oren Etzioni" (and heard the skepticism in the Times
quote), I figured he must be an imported academic.

But no, he's an American go-getter.

https://en.wikipedia.org/wiki/Oren_Etzioni
------------
Oren Etzioni is an American entrepreneur and professor of Computer Science and
Executive Director of the Allen Institute for Artificial Intelligence. . .

In May 2005, he founded and became the director of the University [of Washington]'s
Turing Center. The Center investigates problems in data mining, natural language processing,
the Semantic Web and other web search topics. He coined the term machine reading
and he created the first commercial comparison shopping agent.

Etzioni is an entrepreneur who has founded or co-founded several business ventures,
including MetaCrawler (bought by Infospace), Netbot (bought by Excite), and
ClearForest (bought by Reuters). He founded Farecast, a travel metasearch and
price prediction site, which was acquired by Microsoft in 2008. He co-founded
Decide, a company whose website Decide.com helped consumers make buying decisions
using previous price history and recommendations from other users. Decide.com was
bought by eBay in September, 2013. He is also a venture partner at the
Madrona Venture Group. . .
====

Ah, **that** kind of "AI".

"AI. . . entirely focused on building tools. . ."

jimf said...

From a Reddit AskMeAnything.
http://www.reddit.com/r/IAmA/comments/2hdc09/im_oren_etzioni_head_of_paul_allens_institute_for/
-------------
Q. You've been called a singularity skeptic. Do you think at all about an AI
at some point in the future becoming more intelligent than humans?
Do you think it's possible to have a runaway self-improving AI
(intelligence explosion). . . ?

A. The plausible scenario based on my working actively in this field for more
than 25 years is that we will continue to make progress BUT that there's
no runaway intelligence. . .

Q. You don't seem too impressed with the potential of deep learning, even dismissive.
What's wrong with it?

A. Deep learning is an impressive technique for harnessing Moore's Law and
reams of data (BIG DATA) for classification. My point is that there is much
more to intelligence than classifying things!

Q. Can you give a few examples of the intelligence problems that you think [are]
hopeless to formulate into classification/regression/scoring?

A. Yes: - natural language understanding - chess playing (solved but NOT by
classification) - theory formation (like a scientist does) - medical diagnosis
and much more...

Q. What first motivated you to get into the [AI] field?

A. It is one of the most fundamental intellectual problems and it's really,
really hard. I find computers so rigid, so stupid that it's infuriating.
My goal is to fight "artificial stupidity" and to build AI programs that
help scientists, doctors, and regular folks make sense of the world and
the tsunami of information that we all face every day. . .

Q. Could. . . artificial intelligence [have] genuine emotional responses. . .?

A. AI is entirely focused on building tools that help us solve thorny intellectual
problems. Think of an AI program as a super-fancy calculator. So we are not
generally thinking about emotions. . .

Q. Would the IBM computer that played Jeopardy be called intelligent by your metrics?

A. ...Watson has become an IBM brand for any knowledge based activity they do.
The intelligence is largely in their PR department.

Q. Do you think Technological singularity will ever happen?

A. I think it's more likely that an asteroid will strike the earth in the
next 50 years than that we will reach the "singularity". Sorry Ray...

Q. [In a] million years. . . do you think it would happen?

A. Yes. I'm a materialist. . . [T]he brain is composed of an amazing,
organic architecture and wetware but we can figure out how to build something
as powerful over a million years. However, the 100 year apocalyptic visions
are silly.

Q. Do you have any role models or heroes?

A. Paul Allen is a huge hero of mine for his intellectual passion and scientific
philanthropy that funds us, the brain science institute and more. Also, he owns
the Seattle Seahawks and they are kicking ass!
====

Smart answer.

jimf said...

> Think of an AI program as a super-fancy calculator.

You know, by **calling** what he's doing "AI" (or by Paul Allen
calling the outfit Etzioni's running the "Allen Institute for
Artificial Intelligence"), Etzioni (and Allen) are **still** tapping
into the current hype surrounding "AI" -- the 90s pre-millennial Extropian
and SFnal enthusiasm (and doomsday prophecies) that have been
picked up over the past decade and a half by the mainstream media,
movies, and (at last! ;-> ) by Very Serious People like Musk
and Gates (and Stephen Hawking). (Bill Joy was way ahead of the
curve. ;-> ).

Of course, if there's a public backlash against "AI" in ten years
(as there was in the 70s after the hype surrounding it in the 50s and 60s --
but this time it'll be a much broader and more popular backlash
than just little-known government funding agencies like DARPA becoming
disillusioned by unrealized promises), then he (and folks like him)
will presumably just switch to calling it (whatever "it" is)
something else. He's already disavowed any connection with the
Singularitarians, and despite the "AI" moniker he's
explicitly disavowed any pursuit of the usual SFnal
notions of AI (though again, just by using the term
he's hedging his bets).

When Cyc ( https://en.wikipedia.org/wiki/Cyc )
dragged on and on without showing much in the way
of results after having been hyped for years in places like
the Sunday Times magazine, Lenat ended up claiming
he'd never been trying to do AI in the first place, though that's
what his project had been billed as in the popular
media. I seem to recall Henry Markram (of the Human Brain
[né Blue Brain] project) making similar disclaimers not too
long ago (probably last year, when that infamous Open Letter
was published).

I can't complain too much. People with Ideas have to tickle the
fancy of whoever's paying the bills.