Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All
Tuesday, September 05, 2017
There Is No AI
Nothing that is being called "AI" these days is actual AI -- which is not to deny how dangerous what passes for AI these days happens to be. The threat of "AI" today is entirely the threat of intelligent designers, owners, and users abusing computers in predictably unscrupulous and reckless ways. Elon Musk and his ilk are not so much warning us of the dangers of AI as they are profitably indulging in them while distracting the marks with shiny sfnal objects. It may be useful to recall that "AI" discourse has its (1) robocultic True Believers and ideologues, for whom AI cannot fail, only be failed; its (2) opportunistic evangelical hucksters and VC tech-types out to rationalize parochial tech profits with hyped promises and threats; and its (3) many ignorant, opportunistic tech-infotainment fluffers in the advertorial press and in various consumer fandoms. These constituencies overlap, supplement, and complement one another, and provide wiggle room and contexts for one another (most discourses and organized movements exhibit this sort of complexity and dynamism; AI discourse is no different).
3 comments:
Speaking of fantasy: Kurt Andersen's new book _Fantasyland_ is excerpted and discussed in The Atlantic:
https://www.theatlantic.com/magazine/archive/2017/09/how-america-lost-its-mind/534231/
https://www.theatlantic.com/national/archive/2017/08/radio-atlantic-kurt-andersen-on-how-america-lost-its-mind/536532/
It's quite a compendium. I sat down in the bookstore to read one chapter:
Chapter 37, "The Inmates Running the Asylum Decide Monsters Are Everywhere",
about the Satanism/child sexual abuse/recovered memory/day-care panic of the 80s.
It's a **dense** story, and the hysteria underpinning that echo of the
witch-hunts of earlier centuries is still, I fear, bubbling dangerously just under
the surface of American society.
In a bit of serendipitous irony, right next door to _Fantasyland_, on the
same side of the new arrivals display, was another brand-new book:
_Life 3.0: Being Human in the Age of Artificial Intelligence_
by Max Tegmark
https://www.amazon.com/Life-3-0-Being-Artificial-Intelligence/dp/1101946598/
This is the second book I've purchased in the past few weeks having to
do with artificial intelligence (in the SFnal gee-whiz sense of the phrase).
The earlier one was fiction:
_After On: A Novel of Silicon Valley_
by Rob Reid
https://www.amazon.com/After-Silicon-Valley-Rob-Reid/dp/1524798053/
which was a lightweight, if modestly entertaining,
thriller-cum-SF novel with a feel-good ending.
The Tegmark book is (nominally) non-fiction, but it begins with a
sketch of what might, I suppose, be turned into a similar
thriller-cum-SF novel with a feel-good ending. Inside one of the
giant Web-era corporations (think Google or Alphabet, I suppose), there's
a super-secret skunkworks (the "Omegas") comprising the world's
brightest AI programmers who realize (in contrast to the rest of the
scientific community, the managers of the parent company, and
the world's government intelligence services, who are kept in the dark)
that they're **this** close to a recursively self-improving strong AI.
The magic MacGuffin here is that a program that isn't **already** a better-than-human AI
can somehow nevertheless be made to write AI code that works better than what a group
of the best human programmers can write. (One is irresistibly reminded
of that Sidney Harris cartoon
http://cafehayek.com/wp-content/uploads/2014/03/miracle_cartoon.jpg ).
So the Omegas launch their (carefully-sandboxed and isolated from
the Web) "Prometheus" AI (shades of "Proteus" in Dean Koontz's _Demon Seed_ ;-> ),
and by the time it reaches its 10th self-improved iteration, it's making
money hand-over-fist, first by selling its services on Amazon Mechanical Turk
( https://www.mturk.com/mturk/help?helpPage=overview ), and later by making
brilliant animated movies faster and more cheaply than any human-run studio
could ever accomplish. Later on, Prometheus becomes an all-wise oracle
providing technological gizmos (manufactured and sold by a world-spanning
network of carefully-disguised shell companies), solving, among other
things, the energy and climate-change problems. It concocts amazingly-effective
educational programs. Its godlike knowledge of human psychology and
human events allows it to manipulate politics, put the bad guys out of power
and the good guys into power, and defuse the threat of nuclear war.
Among its ideological agendas: taking social services out of the hands
of government and replacing them with the largesse of fabulously-profitable,
privately-owned companies (controlled ultimately by the Omegas, of course) which,
not being beholden to greedy shareholders, can afford to devote a modest share of profits
to charity.
By the end of this Preface, "The Omegas had now completed the most dramatic
transition in the history of life on Earth. For the first time ever, our planet
was run by a single power, amplified by an intelligence so vast that it could
potentially enable life to flourish for billions of years on Earth and throughout
our cosmos -- but what specifically was their plan?"
;->
So Chapter 1 of Tegmark's book, "Welcome to the Most Important Conversation
of Our Time," contains this passage (p. 33):
The Beneficial-AI Movement
. . .I first met Stuart Russell in a Paris café in June 2014. . .
[H]e [is] one of the most famous AI researchers alive, having authored
the standard textbook on the subject. . . He explained to me how progress
in AI had persuaded him that human-level AGI this century was a real
possibility and, although he was hopeful, a good outcome wasn't guaranteed.
There were crucial questions that we needed to answer first, and they
were so hard that we should start researching them now, so that we'd
have the answers ready by the time we needed them.
Today, Stuart's views are rather mainstream, and many groups around the
world are pursuing the sort of AI-safety research that he advocates. . .
In the past decade, research on such topics was mainly carried out by
a handful of independent thinkers who weren't professional AI researchers,
for example Eliezer Yudkowsky, Michael Vassar and Nick Bostrom. Their
work had little effect on most mainstream AI researchers, who tended to
focus on their day-to-day tasks of making AI systems more intelligent
rather than on contemplating the long-term consequences of success.
Of the AI researchers I knew who did harbor some concern, many hesitated
to voice it out of fear of being perceived as alarmist technophobes.
I felt that this polarized situation needed to change, so that the full
AI community could join and influence the conversation about how to build
beneficial AI. Fortunately, I wasn't alone. In the spring of 2014,
I'd founded a nonprofit called the Future of Life Institute. . .
together with my wife, Meia, my physicist friend Anthony Aguirre,
Harvard grad student Viktoriya Krakovna and Skype founder Jaan Tallinn.
Our goal was simple: to help ensure that the future of life existed and
would be as awesome as possible. . .
There was broad consensus that although we should pay attention to
biotech, nuclear weapons and climate change, our first major goal should
be to make AI-safety research mainstream. My MIT colleague Frank Wilczek,
who won a Nobel Prize for helping figure out how quarks work, suggested
that we start by writing an op-ed to draw attention to the issue
and make it harder to ignore. I reached out to Stuart Russell (whom
I hadn't yet met) and to my physics colleague Stephen Hawking, both
of whom agreed to join me and Frank as co-authors. Many edits later,
our op-ed was rejected by _The New York Times_ and many other U.S.
newspapers, so we posted it on my _Huffington Post_ blog account.
To my delight, Arianna Huffington herself emailed and said, "thrilled to
have it! We'll post at #1!," and this placement at the top of the
page triggered a wave of media coverage of AI safety that lasted for
the rest of the year, with Elon Musk, Bill Gates and other tech
leaders chiming in. Nick Bostrom's book _Superintelligence_ came out
that fall and further fueled the public debate. . .
When I left [the January 2015 "The Future of AI: Opportunities and
Challenges" conference in] Puerto Rico, I did so convinced that the
conversation we had there. . . [is] the most important conversation
of our time.[*]
[*] The AI conversation is important in terms of both urgency and impact.
In comparison with climate change, which might wreak havoc in fifty
to two hundred years, many experts expect AI to have greater impact
within decades -- and to potentially give us technology for mitigating
climate change. In comparison with wars, terrorism, unemployment,
poverty, migration and social justice issues, the rise of AI will
have greater overall impact -- indeed, we'll explore in this book
how it can dominate what happens with all these issues, for better or
for worse.
====
I'll try to choke down the rest of the book. I did buy it,
after all. I can always cast side glances at
https://reddragdiva.tumblr.com/tagged/the-crackpot-offer-indeed
;->
Robot Gods, genetic enhancement and longevity, artificial meat, virtuality blah blah blah blah, the terms never changing, the promises and skeery threats never happening, always oh so very important to talk about, year after year after year as privileged mediocrities game the economy and political system in the most boring, serially failed, utterly predictably idiotic ways. I could re-run the first ten years of this blog, just changing the names of the latest tech soopergeniuses as they make exactly the same stupid claims, and I would presumably resume my place as incendiary tech critic. I definitely hear you when you say you'll choke the latest drivel down. My righteous rage, like the pleasure I once took in ridiculing tech hucksters, has long since been eclipsed by demoralization. Trump's America is the futurological future, a shriveled white dick with a megaphone peddling late-nite infomercials over a stinking landfill under a gray snowfall of cremated ashes.