Last month I spent a few weeks in correspondence with an interesting writer and occasional journalist who stumbled upon some transhumanist sub(cult)ures and wanted to publish an exposé in a fairly high-profile tech publication. She is a congenial and informed and funny person, and I have no doubt she could easily write a piece about futurology quite as excoriating as the sort I do, but probably in a more accessible way than my own writing provides. I was rather hoping she would write something like that -- and I suspect she had drafts that managed the trick -- but the published result was a puff-piece, human-interest narratives of a handful of zany robocultic personalities, that sort of thing, and it ended up being a much more promotional than critical engagement, with a slight undercurrent of snark suggesting she wasn't falling for the moonshine without saying why exactly or why it might matter. I'm not linking to the piece or naming my interlocutor because, as I said, I still rather like her, and by now I can't say that I am particularly surprised at the rather lame product eventuating from our (and her other) conversations. She is a fine writer, but I don't think there is much of an appetite for real political or cultural criticism of futurological discourse in pop-tech circles, at any rate when it doesn't take the form of making fun of nerdy nerds or indulging in disasterbatory hyperbole.
The transhumanists, singularitarians, techno-immortalists, digi-utopians, geo-engineers and other assorted futurological nuts I corral under the parodic designation "Robot Cultists" remain sufficiently dedicated to their far-out viewpoints that they do still continue to attract regular attention from journalists and the occasional academic looking for a bit of tech drama or tech kink to spout about. I actually think the robocultic sub(cult)ure is past its cultural heyday, but its dwindling number of stale, pale, male enthusiasts has been more than compensated for lately by the inordinate number of high-profile "tech" billionaires who now espouse aspects of the worldview in ways that seem to threaten to have Implications, or at least make money slosh around in ways it might not otherwise do.
Anyway, as somebody who has been critiquing and ridiculing these views in public places for over a quarter century I, too, attract more attention than I probably deserve from journalists and critics who stumble upon the futurological freakshow and feel like reacting to it. For the last decade or so I have had extended exchanges with two or three writers a year, on average, all of whom have decided to do some sort of piece or even a book about the transhumanists. For these futurologically-fascinated aficionados I inevitably provide reading lists, contacts, enormous amounts of historical context, ramifying mappings of intellectual and institutional affiliation, potted responses to the various futurological pathologies they happen to have glommed onto, more or less offering an unpaid seminar in reactionary futurist discourse.
Articles do eventually appear sometimes. In them I am sometimes a ghostly presence, offering up a bit of decontextualized snark untethered to the argument or context that would give it much in the way of rhetorical force. But far more often the resulting pieces of writing neither mention me nor reflect much of an engagement with my arguments. As a writer really too polemical for academia and too academic for popular consumption, I can't say that this result is so surprising. However, lately I have made a practice of keeping my side of these exchanges handy so that at least parts of them can see the light of day on my blog. What follows is some comparatively pithy Q & A from the latest episode of this sort of thing, edited, as it were, to protect those who would probably prefer to remain nameless in this context:
Q & A:
Q: What do you think the key moral objections are to transhumanism?
Well, I try not to get drawn into discussions with futurists about whether living as an immortal upload in the Holodeck or being "enhanced" into a sexy comic book superhero body or being ruled by a superintelligent AI would be "good" or "bad." None of these outcomes is ever going to arrive to be good or bad anyway; none of the assumptions on which these prophetic dreams are based is even coherent, really. So the moral question (or perhaps this is more a question for a therapist) should probably be more like -- Is it good or bad to be devoting time to these questions rather than to problems and possibilities that actually beset us? What kind of work is getting done for the folks who give themselves over to infantile wish-fulfillment fantasizing on these topics? Does any of this make people better able to cope with shared problems or more attuned to real needs or more open to possibilities for insight or growth?
You know, in speculative literature the best imaginative and provocative visions have some of the same sort of furniture in them you find in futurological scenarios -- intelligent artifacts, powerful mutants, miraculous abilities -- but as in all great literature, their strangeness provides the distance or slippage that enables us to think more critically about ourselves, to find our way to sympathetic identification with what might otherwise seem threatening alienness, to overcome prejudices and orthodoxies that close us off to hearing the unexpected that changes things for the better. Science fiction in my view isn't actually about predictions at all, or it is only incidentally so: it is prophetic because it finds the open futurity in the present world, it builds community from the strangeness and promise in our shared differences.
But futurism and tech-talk aren't prophetic in this sense at all, when you consider them more closely -- they operate much more like advertising does, promising us easy money, eternal youth, technofixes to end our insecurities, shiny cars, skin kreme, boner pills. The Future of the futurists is stuck in the parochial present like a gnat in amber. It freezes us in our present prejudices and fears, and peddles an amplification of the status quo as "disruption," stasis as "accelerating change." Futurology promises to "enhance" you -- but makes sure you don't ask the critical questions: enhanced according to whom? for what ends? at what costs? Futurology promises you a life that doesn't end -- but makes sure you don't ask the critical questions: what makes a life worth living? what is my responsibility in the lives of others with whom I share this place and this moment? Futurology promises you intelligent gizmos -- but makes sure you don't ask the critical questions: if I call a computer or a car "intelligent," how does that change what it means to call a human being or a great ape or a whale intelligent? what happens to my sense of the intelligence lived in bodies and incarnated in historical struggles if I start "recognizing" it in landfill-destined consumer devices? I think the urgent moral questions for futurologists have less to do with their cartoonish predictions than with the morality of thinking futurologically at all, rather than thinking about real justice politically and real meaning ethically and real problems pragmatically.
Q: Why do you think climate change denial is so rife among this movement?
Many futurologists like to declare themselves to be environmentalists, so this is actually a tricky question. I think it might be better to say futurism is about the displacement rather than the outright denial of catastrophic anthropogenic climate change. For example, you have futurists like Nick Bostrom and Elon Musk who will claim to take climate change seriously but then who will insist that the more urgent "existential risk" humans face is artificial superintelligence. As climate refugees throng tent-cities and waters flood coastal cities and fires rage across states and pandemic disease vectors shift with rising temperatures these Very Serious futurological pundits offer up shrill warnings of Robocalypse.
Since the birth of computer science, generation after generation after generation, its intellectual luminaries have been offering up cocksure predictions about the imminence of world changing artificial intelligence, and they have never been anything but completely wrong about that. Isn't that rather amazing? The fact is that we have little scientific purchase on the nature of human intelligence and the curiously sociopathic body-alienated models of "intelligence" that suffuse AI-enthusiast subcultures don't contribute much to that understanding -- although they do seem content to code lots of software that helps corporate-military elites treat actually intelligent human beings as if we were merely robots ourselves.
Before we get to climate change denial, then, I think there are deeper denialisms playing out in futurological sub(cult)ures -- a terrified denial of the change that bedevils the best plans of our intelligence, a disgusted denial of the aging, vulnerable, limited, mortal body that is the seat of our intelligence, a horrified denial of the errors and miscommunications and humiliations that accompany the social play of our intelligence in the world. Many futurists who insist they are environmentalists like to talk about glorious imaginary "smart" cities or give PowerPoint presentations about geo-engineering "technofixes" to environmental problems in which profitable industrial corporate-military behemoths save us from the destruction they themselves have caused in their historical quest for profits. The futurists talk about fleets of airships squirting aerosols into the atmosphere, dumping megatons of filings into the seas, building cathedrals of pipes to cool surface temperatures with the deep sea chill, constructing vast archipelagos of mirrors in orbit to reflect the sun's rays -- and while they are hyperventilating these mega-engineering wet-dreams they always insist that politics have failed, that we need a Plan B, that our collective will is unequal to the task. Of course, this is just another variation of the moral question you asked already. None of these boondoggle fantasies will ever be built to succeed or fail in the first place; there is little point in dwelling on the fact that we lack the understanding of eco-systemic dynamics to know whether the impacts of such pharaohnic super-projects would be more catastrophic than not; the whole point of these exercises is to distract the minds of those who are beginning to grasp the reality of our shared environmental responsibilities from the work of education, organization, agitation, legislation, investment that can be equal to this reality.
Here, the futurological disgust with and denial of bodies, embodied intelligence, becomes denial of the material substance of political change, of historical struggle, bodies testifying to violation and to hope, assembled in protest and in collaboration.
Many people have been outraged recently to discover that Exxon scientists have known the truth about their role in climate catastrophe for decades and lied about it to protect their profits. But how many people are outraged that just a couple of years ago ExxonMobil CEO Rex Tillerson declared that climate change is simply a logistical and engineering problem? This is the quintessential form that futurological climate-change displacement/denialism takes: it begins with an apparent concession of the reality of the problem and then trivializes it. Futurology displaces the political reality of crisis -- who suffers climate change impacts? who dies? who pays for the mitigation efforts? who regulates these efforts? who is accountable to whom and for what? who is most at risk? who benefits and who profits from all this change? -- into apparently "neutral" technical and engineering language. Once this happens the demands and needs of the diverse stakeholders in change vanish and the technicians and wonks appear, white faces holding white papers enabling white profits.
Q: What are the most obvious historical antecedents to this kind of thinking?
Futurological dreams and nightmares are supposed to inhabit the bleeding edge, but the truth is that their psychological force and intuitive plausibility draw on a deeply disseminated archive of hopes and tropes... Eden, Golem, Faust, Frankenstein, Excalibur, Love Potions, the Sorcerer's Apprentice, the Ring of Power, the Genie in a Bottle, the Fountain of Youth, Rapture, Apocalypse and on and on and on.
In their cheerleading for superintelligent AI, superpowers/techno-immortalism, and digi-nano-superabundance it isn't hard to discern the contours of the omni-predicates of centuries of theology: omniscience, omnipotence, omnibenevolence. Patriarchal priests and boys with their toys have always marched through history hand in hand. And although many futurologists like to make a spectacle of their stolid scientism it isn't hard to discern the old-fashioned mind-body dualism in their digital-utopian virtuality uploading fantasies. Part of what it really means to be a materialist is to take materiality seriously, which means recognizing that information is always instantiated on a non-negligible material carrier, which means it actually matters that all the intelligence we know as such has so far been biologically incarnated. There is a difference that should make a difference to a materialist in the aria sung in the auditorium, heard on vinyl, pulled up on .mp3. Maybe something like intelligence can be materialized otherwise, but will it mean all that intelligence means to us in an imaginative, empathetic, responsible, rights-bearing being sharing our world? And if it doesn't, is "intelligence" really the word we should use or imagine using to describe it?
Fascination with artifacts that seem invested with spirit -- puppets, carnival automata, sex-dolls -- is as old or older than written history. And of course techno-fetishism, techno-reductionism, and techno-triumphalism have been with us since before the Treaty of Westphalia ushered in the nation-state modernity that has preoccupied our attention with culture wars in the form of les querelles des anciens et des modernes right up to our late-modern a-modern post-modern post-post-modern present: big guns and manifest destinies, eugenic rages for order, deaths of god and becoming as gods, these are all old stories. The endless recycling of futurological This! Changes! Everything! headlines about vat-grown meat and intelligent computers and cost-free fusion and cures for aging every few years or so is the consumer-capitalist froth on the surface of a brew of centuries-old techno-utopian loose-talk and wish-fulfillment fantasizing.
Q: Why should people be worried about who is pushing these ideas?
Of course, all of this stuff is ridiculous and narcissistic and technoscientifically illiterate and all too easy to ignore or deride... and I do my share of that derision, I'll admit that. But you need only remember the example of the Neoconservative foreign-policy "Thought Leaders," marginalized for decades, to understand the danger represented by tech billionaires and their celebrants making profitable promises and warnings about super-AI and immortality-meds and eco escape hatches to Mars. A completely discredited klatch of kooks who fancy themselves the Smartest Guys in the Room can cling to their definitive delusions for a long time -- especially if the nonsense they spew happens to bolster the egos or rationalize the profits of very rich people who want to remain rich above all else. And eventually such people can seize the policy-making apparatus long enough to do real damage in the world.
For over a generation the United States has decided to worship as secular gods a motley assortment of very lucky, rather monomaniacal, somewhat sociopathic tech venture capitalists, few of whom ever actually made anything but many of whom profitably monetized (skimmed) the collective accomplishments of nameless enthusiasts and most of whom profitably marketed (scammed) gizmos already available and usually discarded elsewhere as revolutionary novelties. The futurologists provide a language in which these skim-and-scam operators can reassure themselves that they are Protagonists of History, shepherding consumer-sheeple to techno-transcendent paradise and even godlikeness. It is a mistake to dismiss the threat represented by such associations -- and I must say that in the decades I have been studying and criticizing futurologists they have only gained in funding, institutional gravity, and reputational heft, however many times their animating claims have been exposed and their pernicious nonsense reviled.
But setting those very real worries aside, I also think the futurologists are interesting objects and subjects of study because they represent a kind of reductio ad absurdum of prevailing attitudes and assumptions and aspirations and justificatory rhetoric in neoliberal, extractive-industrial, consumer-oriented, marketing-suffused, corporate-military society: if you can grasp the desperation, derangement and denialism of futurological fancies, it should put you in a better position to grasp the pathologies of more mainstream orthodoxies in our public discourse and authorizing institutions, our acquiescence to unsustainable consumption, our faith in technoscientific, especially military, circumventions of our intractable political problems, our narcissistic insistence that we occupy a summit from which to declare differences to be inferiorities, our desperate denial of aging, disease, and death and the death-dealing mistreatment of others and of ourselves this denialism traps us in so deeply.
Q (rather later): [O]ne more thing: who were the most prominent members of the extropians list? Anyone I've missed? Were R.U Sirius or other Wired/BoingBoing writers and editors on the list? Or engineers/developers etc?
Back in Atlanta in the 1990s, I read the Extropy zine as a life-long SF queergeek drawn to what I thought were the edges of things, I suppose, and I was a lurker on the extropians list in something like its heyday. This was somewhere in the '93-'99 range, I'm guessing. I posted only occasionally since even then most of what I had to say was critical -- the philosophy seemed like amateur hour and the politics were just atrocious -- and it seemed a bit wrong to barge into their clubhouse and piss in the punch bowl if you know what I mean... I was mostly quiet.
The posters I remember as prominent were Max More and Natasha Vita-More, of course, Eliezer Yudkowsky, Damien Broderick (an Australian SF writer), Eugen Leitl, Perry Metzger, Hal Finney, Sasha Chislenko, Mark Plus, Giulio Prisco, Ramona Machado, Nancy Lebovitz… You know, people tend to forget the women's voices because it was such an insistently white techbro kinda sorta milieu. I'm not sure how many women stuck with it, although Natasha is definitely a piece of work, and Ramona was doing something of a proto Rachel Haywire catsuited-contrarian schtick -- Haywire's a more millennial transhumanoid who wasn't around back then. Let's see. There was David Krieger too (I made out with him at an extropian pool party in the Valley of the Silly Con back in '95, I do believe).
I don't think I remember RU Sirius ever chiming in, I personally see him as more of an opportunistic participant/observer/stand-up critic type, really, and I know I remember Nick Szabo's name but I'm not sure I remember him posting a lot. You mentioned Eric Drexler, but I don't remember him posting, he was occasionally discussed and I know he would appear at futurist topic conferences with transhumanoid muckety mucks like More and the cypherpunks like Tim May and Paul Hughes. I do remember seeing Christine Peterson a couple of times.
Wired did a cover story called "Meet The Extropians" which captures well some of the flavor of the group; that was from 1993. Back then, I think techno-immortalism via cryonics and nanobot miracle medicine was the big draw (Aubrey de Grey appeared a bit later, I believe, but the sub(cult)ure was ready for him for sure), with a weird overlap of space stuff that was a vestige from the L5 Society and also a curious amount of gun-nuttery attached to the anarcho-capitalist enthusiasm and crypto-anarchy stuff.
It's no surprise that bitcoinsanity had its birth there, and that the big bucks for transhumanoid/singularitarian faith-based initiatives would come from PayPal billionaires like the terminally awful robocultic reactionary Peter Thiel, given the crypto-currency enthusiasm. Hal Finney was a regular poster at extropians and quite a bitcoin muckety muck right at the beginning -- I think maybe he made the first bitcoin transaction in fact.
Back in those days I was working through connections of technocultural theory and queer theory in an analytic philosophy department in Georgia, and the extropians -- No death! No taxes! -- seemed to epitomize the California Ideology. I came to California as a Queer National with my mind on fire to work with Judith Butler, and I was lucky enough to spend a decade learning from her in the Rhetoric Department at Berkeley, where I ended up writing my diss about privacy and publicity in neoliberal technocultures, Pancryptics. But I never lost sight of the transhumanists -- they seemed and still seem to me to symptomize in a clarifying extreme form the pathologies of our techno-fetishistic, techno-reductionist, techno-triumphalist disaster capitalism. Hope that helps!
Q (much later): Tackling this thing has been a lot more difficult than I imagined it would be. Right now it's sitting on 20,000 words and has to come down to at least half that (pity my editor!). I've gone through quite a journey on it. I still think very much that these ideas are bad and a reflection of a particularly self-obsessed larger moment, and that people should be extremely concerned about how much money is going into these ideas that could be so much better spent elsewhere. The bizarre streak of climate denialism is likewise incredibly disturbing…. But then I kind of came around in a way to sympathising with what is ultimately their fear which is driving some of this, an incredibly juvenile fear of dying. But a fear of being old and infirm and in mental decline in a society that is in denial about the realities of that, and which poses few alternatives to that fate for all of us, in a way I can understand that fear…. In any case, amazing that they let you proofread [their official FAQ] for them, even though you are so critical of their project! Or do you think they were just grateful for someone who could make it read well on a sentence level?
You have my sympathies, the topic is a hydra-headed beast when you really dig in, I know. Nick Bostrom and I had a long phone conversation in which I leveled all sorts of criticisms of transhumanism. That I was a critic was well known, but back then socialist transhumanist James Hughes (who co-founded IEET with him) and I were quite friendly, and briefly I was even "Human Rights" fellow at IEET myself -- which meant that they re-published some blog posts of mine. (I write about that and its rather uncongenial end here.) Anyway, Bostrom and I had a wide-ranging conversation that took his freshly written FAQ as our shared point of departure. He adapted/qualified many claims in light of my criticisms, but ignored a lot of them as well and of course the central contentions of the critique couldn't be taken up without, you know, giving up on transhumanism. As a matter of fact, we didn't get past the first half of the thing. It was a good conversation though, I remember it was even rather fun. I do take these issues seriously as you know and, hell, I'll talk to anybody who is going to listen in a real way.
You know, I've been criticizing futurism for decades -- there were times when I was one of the few people truly informed of their ideas even if I was critical of them, and some of them appreciated the chance to sharpen their arguments on a critic. I've had many affable conversations with all sorts of these folks, Aubrey de Grey, Robin Hanson, Max More even. The discourse is dangerous and even evil in my opinion, but its advocates are human beings which usually means conversations can happen face to face.
I know what you mean when you say you sympathize after a fashion upon grasping the real fear of mortality driving so much of their project -- and I would say also the fear of the uncontrollable role of chance in life, the vulnerability to error and miscommunication in company. But you know reactionary politics are always driven by fear -- and fear is always sad. I mean, the choices are love or fear when it comes down to it, right? And to be driven by fear drives away so much openness to love, and there's no way to respond to that but to see the sadness of it. When it comes to it, these fears are deranging sensible deliberation about technoscientific change at a historical moment when sense is urgently needed; these fears make them dupes, and often willing ones, of plutocratic and death-dealing elites; these fears lead them to deceive themselves and to deceive others who are also vulnerable. One has to be clear-headed about such things, seems to me.
Q (still later): Have entered new phase: What if the Extropians were just a Discordian-type joke that other people came to take seriously?
Yes, they're a joke. But it's on us, and they aren't in on it. As I mentioned before, the better analogy is the Neocons: they were seen as peddlers of nonsense from the perspective of foreign policy professionals (even most conservatives thought so) but they were well-funded because their arguments were consoling and potentially lucrative to moneyed elites and eventually they stumbled into power via Bush and Cheney whereupon they implemented their ideas with predictable (and predicted) catastrophic consequences in wasted lives and wasted wealth. To be clear: the danger isn't that transhumanoids will code a Robot God or create a ruler species of immortal rich dudes with comic-book sooper-powers, but that they will divert budgets and legislation into damaging policies and dead ends that contribute to neglected health care, dumb and dangerous software, algorithmic harassment and manipulation, ongoing climate catastrophe, the looting of public and common goods via "disruptive" privatization, exploitative "development," cruel "resilience," and upward-failing techbro "Thought Leadership."
18 comments:
> I have no doubt she could easily write a piece about
> futurology quite as excoriating as the sort I do, . . .
> I suspect she had drafts that managed the trick --
> but the published result was a puff-piece. . .
Presumably you can blame the editorial policies, or decisions,
of that "high-profile tech publication".
Balance, fairness, avoidance of undue controversy -- that's
what editors do, isn't it? Of course, by making it a puff
piece, or a "human interest narrative... of a handful of
zany robocultic personalities" the underlying message targeted
to the "sophisticated" reader can be "this article,
and the movement it describes, is pure entertainment --
you might as well be reading _People_ or _Us_ here"
while the enthusiast can think "Wow, look at all the
mainstream coverage we're getting!"
Plausible deniability all around.
And that subtly snarky, knowing, distancing attitude --
isn't that a major element of what they used to call "Timestyle"
(after the magazine that invented it)?
> [Y]ou have futurists like Nick Bostrom and Elon Musk who will
> claim to take climate change seriously but then who will insist
> that the more urgent "existential risk" humans face is artificial
> superintelligence. . .
>
> But you know reactionary politics are always [d]riven by fear --
> and fear is always sad.
You know, hard as it may be for more sober-minded folks to grasp,
there apparently **really are** people in the world who take
the Evil-AI-taking-over-the-world scenario seriously, and not just
con artists looking to solicit money for their "institutes"
or to further their academic careers. Bill Joy presumably really
did get the fantods back in 2000 after Ray Kurzweil chatted him up
in a bar about the coming technological Singularity. And apparently
people on Less Wrong really did get nightmares and anxiety attacks
about Roko's Basilisk.
I gather that Scott Siskind (aka Scott Alexander of the "Slate Star
Codex" blog and the "Slate Star Scratchpad" Tumblr) -- and this guy
is a **psychiatrist** for crying out loud (or at least a
psychiatrist-in-training) -- **really really** takes this stuff seriously
(and that's why he's so invested in Yudkowsky/MIRI/LessWrong).
http://slatestarcodex.com/2015/12/17/should-ai-be-open/
-------------
December 17, 2015
by Scott Alexander
. . .
The decision to make AI findings open source is a tradeoff
between risks and benefits. The risk is letting the most
careless person in the world determine the speed of AI
research – because everyone will always have the option
to exploit the full power of existing AI designs, and
the most careless person in the world will always be the
first one to take it. The benefit is that in a world
where intelligence progresses very slowly and AIs are
easily controlled, nobody will be able to use their
sole possession of the only existing AI to garner too
much power.
Unfortunately, I think we live in a different world – one
where AIs progress from infrahuman to superhuman intelligence
very quickly, very dangerously, and in a way very difficult
to control unless you’ve prepared beforehand. . .
====
There's been some chitchat on Tumblr in the past couple of
days sparked by the recent OpenAI announcement and Siskind's
reaction to it.
http://su3su2u1.tumblr.com/post/135473918278/unfortunately-i-think-we-live-in-a-different
-------------
> Unfortunately, I think we live in a different world. . .
>
> -- @slatestarscratchpad
Yay, we get to have this discussion again!
I call dibs on calling bullshit before anyone else!
====
http://su3su2u1.tumblr.com/post/135584613793/in-slatestars-open-ai-piece-scott-says-many
-------------
> Anonymous asked:
>
> In slatestar's open AI piece Scott says "many thinkers in this
> field including Nick Bostrom and Eliezer Yudkowsky worry..."
> and more generally refers to his piece on AI risk to suggest
> a consensus (with the 'Bostromian' view) in principle on the
> dangers of AI (if not actually in line with 'risk research').
> I don't wish to dismiss this as "researchers' incentives for
> funding lead to people chasing hype [even if from dubious sources]".
> Thoughts on a reasonable response to the claim?
Well, first, Bostrom and Yudkowsky aren’t really technical
researchers. As a slight metaphor, they are more like philosophy
of science than actual science. They aren’t really publishing
technical CS work. So how is “the field” being defined? Describing
them as in the technical AI field is enormously misleading.
Now, there are a few actual machine learning/AI researchers who
do say that maybe this is something worth worrying about, but
it’s a minority. Also, the majority of the people who say that
it’s worth worrying about generally aren’t putting their money
where their mouth is -- their research plans are the same they've
always been. I think this puts a bound on how seriously they
really take the problem.
====
And cf. Ben Goertzel on "The Singularity Institute's Scary Idea"
http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html
(via
http://amormundi.blogspot.com/2013/03/realtime-robocult-id-and-ick.html
http://amormundi.blogspot.com/2013/01/a-robot-god-apostles-creed-for-less.html
http://amormundi.blogspot.com/2013/10/wired-discussion-of-techno-immortalist.html )
and
The Fallacy of Dumb Superintelligence
By Richard Loosemore
http://ieet.org/index.php/IEET/more/loosemore20121128
There is often a discursive/subcultural co-dependence between profit/attention-seeking con artists and earnestly paranoid, usefully idiotic conspiracists, surely? That isn't confined to robocultism -- reactionary politics and New Age subcultures both offer photogenic examples.
> I actually think the robocultic sub(cult)ure is past its cultural
> heyday. . .
Yeah, it kind of hit its peak for me before The Future (for my
generation, that was the year 2000) actually arrived.
Nevertheless, some folks are still partying like it's 1999.
via http://ieet.org/index.php/IEET/more/pellissier20151221 :
http://motherboard.vice.com/read/the-turing-church-preaches-the-religion-of-the-future
-----------------
The Turing Church Preaches the Religion of the Future
by Andrew Paul
December 16, 2015
[T]he Italian theoretical physicist and computer scientist [talks]
about his latest, and, to some, most quixotic endeavor: the Turing Church,
a transhumanist group that he hopes will curate the crowdsourcing
of a techno-rapture. In many ways, Prisco and his supporters want
to provide a literal faith in the future.
It’s one of the newest in a multitude of quasi-religious movements,
all vying for a place in the rapidly changing futurist landscape.
Prisco is carving out a digital space for what he hopes will store
the building blocks for the construction of humanity’s direction. . .
====
Don’t Worry, Intelligent Life Will Reverse the Slow Death of the Universe
-----------------
By Giulio Prisco
Turing Church
Posted: Aug 13, 2015
A scientific paper announcing that the universe is slowly
dying is making waves on the Internet. But don’t worry, intelligent
life will be able to do something about that. . .
====
But will intelligent life still be watching _Annie Hall_?
-----------------
Alvy's mother: He's been depressed. All of a sudden, he can't do anything.
Doctor: Why are you depressed, Alvy?
Alvy's mother: Tell Dr. Flicker. (To the doctor) It's something he read.
Doctor: Something he read, huh?
Alvy: The universe is expanding...Well, the universe is everything,
and if it's expanding, some day it will break apart and that will be
the end of everything.
Alvy's mother: What is that your business? (To the doctor) He stopped
doing his homework.
Alvy: What's the point?
Alvy's mother: What has the universe got to do with it? You're here in Brooklyn.
Brooklyn is not expanding.
Doctor: It won't be expanding for billions of years yet, Alvy. And we've
got to try to enjoy ourselves while we're here, huh, huh? Ha, ha, ha.
====
https://www.youtube.com/watch?v=5U1-OmAICpU
> I actually think the robocultic sub(cult)ure is past its cultural
> heyday, but its dwindling number[s]. . . [have] been more than compensated
> lately by the inordinate amount of high-profile "tech" billionaires
> who now espouse aspects of the worldview in ways that [will]. . .
> make money slosh around in ways it might not otherwise do.
Sloshing around (via https://plus.google.com/+AlexanderKruel/posts ):
http://futureoflife.org/2015/12/17/the-ai-wars-the-battle-of-the-human-minds-to-keep-artificial-intelligence-safe/
----------------
At the start of 2015, few AI researchers were worried
about AI safety, but that all changed quickly. Throughout
the year, Nick Bostrom’s book, Superintelligence: Paths,
Dangers, Strategies, grew increasingly popular. The Future
of Life Institute held its AI safety conference in Puerto Rico.
Two open letters regarding artificial intelligence and
autonomous weapons were released. Countless articles
came out, quoting AI concerns from the likes of Elon Musk,
Stephen Hawking, Bill Gates, Steve Wozniak, and other
luminaries of science and technology. Musk donated $10 million
in funding to AI safety research through FLI. Fifteen million
dollars was granted to the creation of the Leverhulme Centre
for the Future of Intelligence. And most recently, the
nonprofit AI research company, OpenAI, was launched to
the tune of $1 billion, which will allow some of the top
minds in the AI field to address safety-related problems
as they come up.
====
(via http://hplusmagazine.com/2015/12/21/29415/
Rise of the Robots: Disruptive Technologies,
Artificial Intelligence & Exponential Growth)
https://www.youtube.com/watch?v=J9G7ziqvJPM
----------------
Ivar Moesman on Exponential Growth of Technology: Disruptions,
Implications, 3D printing & Bitcoin
Ivar Moesman (@ivarivano) discusses exponential growth of technology,
how it disrupts existing industries and some learnings and implications.
Second part is an introduction to 3D printing and the third part
is about Bitcoin. First part is inspired by Ray Kurzweil and
Peter Diamandis & Steven Kotler‘s book BOLD. For the Bitcoin
part with special thanks and admiration to Andreas M. Antonopoulos,
the bitcoin core developers, Roger Ver, Trace Mayer, Eric Voorhees,
Charlie Shrem, Gavin Andresen, the Bitcoin knowledge podcast,
Let’s talk bitcoin, Epicenter Bitcoin.
Dr. Ben Goertzel (@bengoertzel) is widely recognized as the father
of Artificial General Intelligence. In this talk he discusses:
AI, artificial intelligence, artificial general intelligence,
deep learning, life extension, longevity, robotics, humanoid,
transhumanism.
Professor Doctor De Garis discusses species dominance, artilects,
cosmists, terrans, cyborgists, artilect war, gigadeath.
====
Ben Goertzel is the "father of Artificial General Intelligence"?
Well, at least he admits he didn't coin the **phrase**:
http://wp.goertzel.org/who-coined-the-term-agi/
----------------
August 28, 2011
In the last few years I’ve been asked increasingly often if
I invented the term “AGI” – the answer is “not quite!”
I am indeed the one responsible for spreading the term around
the world. . . But I didn’t actually coin the phrase. . .
In 2002 or so, Cassio Pennachin and I were editing a book on
approaches to powerful AI, with broad capabilities at the human
level and beyond, and we were struggling for a title. Shane Legg. . .
came up with Artificial General Intelligence. . .
A few years later, someone brought to my attention that. . .
Mark Gubrud. . . had used the term in a 1997 article on the
future of technology and associated risks. . .
====
> . . .past its. . . heyday. . .
http://motherboard.vice.com/read/we-need-to-talk-about-how-we-talk-about-artificial-intelligence
--------------
Elon Musk Calling Artificial Intelligence a 'Demon'
Could Actually Hurt Research
by Jordan Pearson
October 29, 2014
Elon Musk drags the future into the present. He’s disrupted
space with his scrappy rocket startup SpaceX and played
a key role in making electric vehicles cool with Tesla Motors.
Because of this, when Musk talks about the future, people
listen. That’s what makes his latest comments on artificial
intelligence so concerning.
Musk has a growing track record of using trumped-up rhetoric
to illustrate where he thinks artificial intelligence research
is heading. Most recently, he described current artificial
intelligence research as “summoning the demon,” and called
the malicious HAL 9000 of 2001: A Space Odyssey fame a “puppy dog”
compared to the AIs of the future. Previously, he’s explained
his involvement in AI firm DeepMind as being driven by his
desire to keep an eye on a possible Terminator situation developing.
This kind of talk does more harm than good, especially when
it comes from someone as widely idolised as Musk.
Ultimately, Musk’s comments are hype; and hype, even when negative,
is toxic when it comes to research. As Gary Marcus noted in a
particularly sharp New Yorker essay last year, cycles of intense
public interest, rampant speculation, and the subsequent
abandonment of research priorities have plagued artificial
intelligence research for decades. The phenomenon is known as
an “AI winter”—recurring periods when funding for AI research
has dried up after researchers couldn’t deliver on the promises
that the media, and researchers themselves, made.
As described in Daniel Crevier’s 1993 book outlining the history
of AI research, perhaps the most infamous example of an AI winter
occurred during the 1970s, when DARPA de-funded many of its
projects aimed at developing intelligent machines after many
of its initiatives failed to produce the results they expected.
Yann LeCun, the head of Facebook’s AI lab, summed it up in a Google+
post back in 2013: “Hype is dangerous to AI. Hype killed AI four
times in the last five decades. AI Hype must be stopped.” What
would happen to the field if we can’t actually build a fully functional
self-driving car within five years, as Musk has promised? Forget
the Terminator. We have to be measured in how we talk about AI.
====
Coupled with Moore's Law running out of steam, things might be
getting pretty chilly again for AI in the next 10 or 15 years.
Bummer! I like new toys as much as the next guy. Where's that
shiny new 3D quantum memristor neuromorphic thingy?
“Hype is dangerous to AI. Hype killed AI four times in the last five decades."
Notice that this utterance is self-contradictory. Things that actually die have to be killed only once. Of course, there has never been an "AI" to kill. Far from being "dangerous" to AI-discourse, hype is all there is to AI-discourse. (Problems in computer science, user-friendliness, network security, of course, need have nothing to do with AI-discourse.)
> . . .making fun of nerdy nerds or indulging in
> disasterbatory hyperbole. . .
http://www.scottaaronson.com/blog/?p=2307
-----------------
Shtetl-Optimized
The Blog of Scott Aaronson
If you take just one piece of information from this blog:
Quantum computers would not solve hard search problems
instantaneously by simply trying all the possible solutions
at once.
The End of Suffering?
June 1st, 2015
A computer science undergrad who reads this blog recently
emailed me about an anxiety he’s been feeling connected to
the Singularity -- **not** that it will destroy all human life,
but rather that it will make life suffering-free and therefore
no longer worth living (more _Brave New World_ than
_Terminator_, one might say). . .
It’s fun to think about these questions from time to time, to
use them to hone our moral intuitions -- and I even agree with
Scott Alexander that it’s worthwhile to have a small number of
smart people think about them full-time for a living. But I
should tell you that, as I wrote in my post The Singularity Is Far,
I don’t expect a Singularity in my lifetime or my grandchildren’s
lifetimes. Yes, technically, if there’s ever going to be a
Singularity, then we’re 10 years closer to it now than we were
10 years ago, but it could still be one hell of a long way away!
And yes, I expect that technology will continue to change in my
lifetime in amazing ways—not as much as it changed in my
grandparents’ lifetimes, probably, but still by a lot -- but how
to put this? I’m willing to bet any amount of money that when
I die, people’s shit will still stink.
===
Hmm. As for that CS undergrad, I'd probably suggest he
read a couple of the late Iain M. Banks' "Culture" novels.
;->
http://hplusmagazine.com/2015/12/22/the-virtuous-circle-of-fantasy/
--------------
The Virtuous Circle of Fantasy
December 22, 2015
Dan Lemire
[. . .has a B.Sc. and a M.Sc. in Mathematics from the
University of Toronto, and a Ph.D. in Engineering
Mathematics from the Ecole Polytechnique and the
Université de Montréal. He is a computer science
professor at the Université du Québec. . .]
It has long been observed that progress depends on the outliers
among us. Shaw’s quote sounds as true today as it did in the past:
“The reasonable man adapts himself to the world: the unreasonable
one persists in trying to adapt the world to himself. Therefore
all progress depends on the unreasonable man.”
I have never heard anyone argue against this observation.
Think about a world where starvation and misery is around the
corner. You are likely to put a lot of pressure on your kids
so that they will conform. Now, think about life in a wealthy
continent like North America in 2015. I know that my kids are
not going to grow up and starve no matter what they do. So I
am going to be tolerant about their career choices. And that’s
a good thing. Had Zuckerberg been my son and had I been poor,
I might have been troubled to see him dropping out of Harvard
to build a “facebook” site. Dropping out of Harvard to build
Facebook was pure fantasy. No parent afraid that his son could
starve would have tolerated it.
This blog is also fantasy. Instead of doing “serious research”,
I write down whatever comes through my mind and post it online.
My blog counts for nothing as far as getting me academic currency.
I have been warned repeatedly that, should I seek employment,
having a blog where I freely shared controversial views could
be held against me… To make matters worse, you, my readers,
are “wasting time” reading me instead of the Financial times
or an Engineering textbook.
The more fantasy we allow, the more progress we enable, and
that in turn enables more fantasy.
There are people who don’t like fantasy one bit, like the radical
islamists. I don’t think that they fear or hate the West so much
as they are afraid of the increasing numbers of people who decide
to be unreasonable. Unreasonable people are like dynamite,
they can destroy your world view. They are disturbing.
There is one straightforward consequence of this analysis:
**Fantasy is growing exponentially.**
====
Calling all Elves. Get your sorry asses back to Middle-earth --
it's time to forge shiny new Rings of Power!
;->
> Shaw’s quote sounds as true today as it did in the past:
>
> “The reasonable man adapts himself to the world: the unreasonable
> one persists in trying to adapt the world to himself. Therefore
> all progress depends on the unreasonable man.”
>
> I have never heard anyone argue against this observation.
Oh, I have.
"Unluckily, it is difficult for a certain type of mind to grasp
the concept of insolubility. Thousands...keep pegging away at
perpetual motion. The number of persons so afflicted is far
greater than the records of the Patent Office show, for beyond the
circle of frankly insane enterprise there lie circles of more and
more plausible enterprise, until finally we come to a circle which
embraces the great majority of human beings.... The fact is that
some of the things that men and women have desired most ardently
for thousands of years are not nearer realization than they were
in the time of Rameses, and that there is not the slightest reason
for believing that they will lose their coyness on any near
to-morrow. Plans for hurrying them on have been tried since the
beginning; plans for forcing them overnight are in copious and
antagonistic operation to-day; and yet they continue to hold off
and elude us, and the chances are that they will keep on holding
off and eluding us until the angels get tired of the show, and the
whole earth is set off like a gigantic bomb, or drowned, like a
sick cat, between two buckets."
-- H. L. Mencken, "The Cult of Hope"
A bit of >Hist dirty laundry.
"EA" stands for "Effective Altruism", and in transhumanist
circles in recent years, it's been co-opted
to mean donating money to Eliezer Yudkowsky's "Machine Intelligence
Research Institute" in order to prevent unFriendly superintelligence
from taking over the world. It must've seemed like a brilliant
fund-raising strategy when somebody (Luke Muehlhauser?) first came up with
it, but it blew up in their faces when Holden Karnofsky of
GiveWell gave SIAI/MIRI a "thumbs down".
[ http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/ ]
And (3 years later) IEET has piled on.
(via
http://rationalwiki.org/wiki/Talk:LessWrong )
http://ieet.org/index.php/IEET/print/10669
---------------
Effective Altruism has Five Serious Flaws - Avoid It -
Be a DIY Philanthropist Instead
Hank Pellissier
July 13, 2015
In an earlier essay[*] I recommended the Effective Altruism (EA)
movement, the humanitarian crusade spearheaded by philosopher
Peter Singer.
Today, I retract my support. . .
FLAW #4: EA’s Weird, Wrong Alliance with MIRI
(Machine Intelligence Research Institute)
MIRI is a Berkeley-based research team that was
previously-titled SIAI (Singularity Institute for
Artificial Intelligence). MIRI has a history of
arrogance and aggressiveness, justified in their minds,
I suppose, by their opinion that the future of the world
depends on their ability to help create Friendly AI.
MIRI has the financial support of Peter Thiel, who is
worth $2.2 billion on Forbes The Midas List. MIRI isn’t
curing disease or helping the poor; its budget pays
the salaries of its aloof, we’re-more-rational-than-you
researchers. I’m dismayed that MIRI has infiltrated EA.
Two of the recommended introductory essays on the
Effective Altruism organization site are written by MIRI
members. Posted second, right under Singer’s preface
article, is a math-wonky article by SIAI/MIRI founder
Eliezer Yudkowsky. Luke Muehlhauser, MIRI’s recent
Executive Director (who left last month to join
GiveWell), wrote a let’s-set-the-agenda article further
down the list, titled “Four Focus areas of effective altruism.”
He places MIRI in the third focus area.
MIRI/SIAI tried to “take over” the transhumanist group
HumanityPlus 3.5 years ago, when four SIAI members ran
for H+’s Board. SIAI ran a sordid, pushy, insulting campaign,
bribing voters, accusing opponents of “racism”, deriding
Board members as “freaky… bat-shit crazy [with] broken
reasoning abilities.” MIRI failed in their attempt to
colonize H+, but they’ve successfully wormed their way
into the heart of EA.
A colleague of mine (who asked me not to disclose their
identity) attended the 2014 EA Summit in San Francisco
and afterwards was of the impression that: “MIRI and CFAR
(Center for Applied Rationality) are essentially the “owners”
of EA. EA as a movement has already sold itself in deals
to devils.” This is surely an exaggeration in international
EA, but in the SF Bay Area.. MIRI’s presence within EA
is uncomfortably strong.
====
[*] Transhumanism: there are [at least] ten different
philosophical categories; which one(s) are you?
By Hank Pellissier
Jul 8, 2015
http://ieet.org/index.php/IEET/more/pellissier20150708
Also from RationalWiki, an account of alleged political in-fighting
among the >Hists.
Transhumanism wasn't always as furiously right-wing as it is
now [it wasn't? You could've fooled me!]. A similar colonisation
happened in 2008-2009, when the libertarians moved in and took
over from the more socialist types. From THE POLITICS OF TRANSHUMANISM
AND THE TECHNO-MILLENNIAL IMAGINATION, 1626–2030 by James J. Hughes
(a PDF I have here):
The elective affinity between libertarian politics and Singularity
can be partly explained by the idea of technological inevitability.
Collective agency is not required to ensure the Singularity, and
human governments are too slow and stupid to avert the catastrophic
possibilities of superintelligence, if there are any. Only small
groups of computer scientists working to create the first
superintelligence with core “friendliness code” could have any
effect on deciding between catastrophe and millennium.
This latter project, building a friendly AI, is the focus of
the largest Singularitarian organization, the Singularity Institute
for Artificial Intelligence (SIAI), headed by the autodidact
philosopher Eliezer Yudkowsky. In “Millennial Tendencies in Responses
to Apocalyptic Threats” (Hughes 2008), I parse Yudkowsky and the
SIAI as the “messianic” version of Singularitarianism, arguing
that their semi-monastic endeavor to build a literal deus ex machina
to protect humanity from the Terminator is a form of magical
thinking. The principal backer of the SIAI is the conservative
Christian transhumanist billionaire Peter Thiel. Like the
Extropians Thiel is an anarcho-capitalist envisioning a
stateless future and funder of the Seasteading Foundation,
which works to create independent floating city-states in
international waters. He also is the principal funder of
the Methuselah Foundation, which works on anti-aging research.
In 2011 and 2012 Thiel was the principal financier of the
SuperPAC backing libertarian Republican Ron Paul, and he
supports other conservative foundations and political
projects on the right.
(continued)
(from THE POLITICS OF TRANSHUMANISM AND THE TECHNO-MILLENNIAL
IMAGINATION, 1626–2030 by James J. Hughes, cont'd)
In 2009 the libertarians and Singularitarians launched a campaign
to take over the World Transhumanist Association Board of Directors,
pushing out the Left in favor of allies like Milton Friedman’s
grandson and Seasteader leader Patri Friedman. Since then the
libertarians and Singularitarians, backed by Thiel’s philanthropy,
have secured extensive hegemony in the transhumanist community.
As the global capitalist system spiraled into the crisis in
which it remains, partly created by the speculation of hedge
fund managers like Thiel, the left-leaning majority of transhumanists
around the world have increasingly seen the contradiction between
the millennialist escapism of the Singularitarians and practical
concerns of ensuring that technological innovation is safe and
its benefits universally enjoyed. While the alliance of Left
and libertarian transhumanists held together until 2008 in the
belief that the new biopolitical alignments were as important
as the older alignments around political economy, the global
economic crisis has given new life to the technoprogressive
tendency, those who want to organize for a more egalitarian
world and transhumanist technologies, a project with a long
Enlightenment pedigree and distinctly millenarian possibilities.
In surveys I conducted in 2003, 2005, and 2007 of the global
membership of the World Transhumanist Association, left-wing
transhumanists outnumbered conservative and libertarian
transhumanists 2-to-1 (Humanity+ 2008). By 2007 16 percent
of respondents specifically self-identified as “technoprogressive.”
James Hughes has forgotten more about democratic politics in his panic at the prospect of personal death than many self-identified lefties will ever know. With respect to the specific arguments to which you refer here, Hughes cooks the books fairly transparently in these surveys to get his desired results. He allows all sorts of kooky futurological neologisms like "upwinger" and "dynamist" into his political IDs and then treats them as "beyond left and right" even when they are demonstrably neoliberal, market fundamentalist, and corporatist-right -- thus making all sorts of actually reactionary factions vanish from being accounted as such. He also disregards all sorts of right-wing eugenic and Bell Curve white supremacist politics in making his calculations. (Given his own weakness for eugenic arguments this isn't exactly surprising.) I mean, sure, Haldane and Sanger were eugenicists, but that doesn't mean the eugenic dimensions of their viewpoints were legibly left even if their avowed politics were overall, nor certainly does it mean that one can STILL be legibly left while holding such views given all that we now understand about them.
Given that Hughes is presumably providing a sophisticated analysis of transhumanist political entailments in this very piece, it is interesting what he leaves out. He doesn't really go into questions of transhumanist subcultures as essentially gizmo-fashion-fandoms embedded in consumer lifestyle politics beholden to exploitative and unsustainable practices; he doesn't go into the susceptibility of techno-determinist or techno-autonomist understandings of history to engender anti-democratic acquiescence to elites and circumvention of democracy by technocrats; and he doesn't question his own willingness to make common cause AS a transhumanist with right-wing transhumanists in what he fancies is a generalized "pro-technology" politics -- as if all technology is the same, when that is obviously a mystification (most useful to incumbent elites, hence, again, a reactionary politics), pretending the politics of technoscientific change inheres in "tech specs" rather than in the political struggles to ensure the costs, risks, and benefits of change are equitably distributed to all the stakeholders to change by their lights (denial of which, yet again, is reactionary). As a key figure in the original formulation and popularization of the term "technoprogressive," I am keen to point out that its use is hardly evidence that one is in the presence of a person who is technoscientifically literate or legibly progressive -- over many years I've repeatedly learned that the hard way! These days, it's not even bad-faith "democratic transhumanists" who are making widest recourse to the technoprogressive term, but techbro venture capitalists rationalizing skim-and-scam operations, often declaring their facile frauds as effective altruism in the bargain -- primping for camera time with patently ridiculous talk of robocalypse and bitcoin rapture.
(via http://hplusmagazine.com/2015/12/29/29450/ )
http://hiimpact.blogspot.com/2015/12/an-open-letter-to-transhumanist-movement.html
-------------
Hi-Impact
Musings and stuff from an armchair futurologist, sci-fi addict
and furry
An Open Letter to the Transhumanist Movement
Tuesday, December 1, 2015
Maybe it’s the casual ones that think themselves better than
everyone else just because they think they see the shape of
technology ahead of the curve. Or maybe it’s the Silicon Valley
tech types who unironically think that lower economic classes
aren’t deserving of the same rights given to them. Regardless
of exactly who it is, perhaps a modern Shakespeare would
write that something is rotten in the state of transhumanism. . .
No doubt I’ve seen quite a few disturbing things from other
transhumanists; but I won’t go into a laundry list of them
because the main theme boils down to this: “I’m better than
everyone else, so screw everyone else”. . .
I suspect a few factors that play into the current narrative
of “got mine, to heck with you” in transhumanism, but I’m not
going to finger point, not now. . . [Oh, what a disappointment ;-> ]
[T]he core ideals of transhumanism and the extraneous baggage
it has acquired are fundamentally at odds with one another.
Transhumanism was supposed to be about improving ourselves, to
become less like apes and more like angels if you will. But
now transhumanism has been transformed (dare I say hijacked?)
by people who revel in being the ape. People who, despite
their ideas on technology and existence, still practice the
power struggles and smug sense of self-superiority that’s been
with us since time immemorial. In other words: to them it’s
less like becoming better and more like being the same person
inside a computer. . .
====
May I recommend another open letter?
It seems to me that transhumanism is about fear and loathing of the aging, vulnerable, error-prone body and brain and functions primarily to rationalize plutocracy (gizmo fandom is freedom, technocratic elites know best). Anybody who thinks "transhumanism" came up with the idea that self-improvement is nice or thinks "transhumanists" have provided any original contributions to self-improvement as actual practice will find themselves improved by having their head examined.
> [P]erhaps a modern Shakespeare would write that
> something is rotten in the state of transhumanism. . .
A friend of mine, off from work between the holidays
and with the rest of his family out of the house for
the duration, invited me over this past weekend to
binge-watch movies on his big HDTV. We started with
the recent SyFy _Childhood's End_ and ultimately
graduated to "serious" dramas (_The Master_, _The
Curious Case of Benjamin Button_, _Doubt_), but in
between he showed me three of his favorite Marvel
superhero movie adaptations -- the first installment
each of _Thor_, _Captain America_, and _The Avengers_.
I hadn't seen any of these before -- I don't keep up
with comic-book movie adaptations. I enjoyed them well
enough, and my friend is a more-or-less sophisticated consumer
of these things (he's pushing 60; he's no 12-year-old).
But in the context of my past almost-20-years' exposure to the
on-line transhumanists, I now find this sort of entertainment
disturbing on several levels. The feeding of adolescent-male
narcissistic power fantasies (however perfumed with ostensible "altruistic"
motivations in the diegesis -- the interior story line),
the militarism, and the atmosphere of American exceptionalism
are certainly bothersome, but what I find most irritating
these days is my certain knowledge that **some** people, of
whatever age (physical or mental), absorb these fantasies as though
they constituted a real paradigm for "the future". All these
thoughts were hovering in the back of my mind even as I was still
appreciating the movies at a 12-year-old's level. Afterwards,
I mentioned these reservations to my friend, and he
acknowledged them rather perfunctorily, but I'm afraid he
doesn't "bellyfeel" them as much as I do at this stage
of my life. ;->