Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Tuesday, December 22, 2015

Divided Government After the Great Sort

Expanded and upgraded from the Moot:

Most representative constitutional governments established in the aftermath of our own experiment in the United States have eschewed those idiosyncrasies of our system owing to the Founders' facile anti-partisan fetish and implemented parliamentary systems instead -- and very much to their benefit for the most part.

Basic administrative functions (like raising the debt ceiling and filling key posts in a timely way) should be professionalized. The Senate Leader and House Speaker should be of the party of the Executive, and (if necessary, multiparty, multifaction) coalitions should form to support the implementation of the policy platform in the service of which the Executive is elected, else the government has no confidence. Of course, here in the United States, none of this is likely ever to be.

Given our present thoroughly institutionalized party duopoly, it is unclear that the organized and by now thoroughly anti-democratic force of the GOP can be sufficiently marginalized even in a conspicuously diversifying, secularizing, planetizing polyculture to be circumvented in a sufficiently timely and sustained way by majorities seeking to address urgent and obvious common problems -- socioeconomic precarity, climate and pandemic catastrophe, global conflicts exacerbated by global trafficking in military weapons -- any one of which threatens the struggle for civilization (which I define as sustainable equity-in-diversity) and which in combination threaten still worse.

"Divided Government" is dysfunctional, depressive of participation, and confuses the necessary assignment of responsibility for policy outcomes. It seems to me that the various Golden Ages of bipartisan co-operation celebrated by Village pundits were mostly periods in which the great evil of the slave-holding and then segregated South was marginalized through its distribution into and management by both parties -- a strategy that never worked well (and could prevent neither a Civil War to resolve the question of slavery nor the betrayal of Reconstruction in the establishment of Jim Crow) and has worked ever less well during the generational "Great Sort" of the Parties in respect to white supremacy, from the New Deal coalition through the Civil Rights era to the Southern Strategy and the descent into the Summers of Tea and the Winter of Trump.

It seems to me that the Founders' celebration of a hyper-individualist conception of "public happiness," informed by the specificity of their experience of Revolutionary politics, undermined their appreciation of other, more democratic dimensions of public happiness connected to assembly, administration, organization, loyalty. (The guardian angel of this blog, Hannah Arendt, the phenomenologist of political power, elaborated the Founders' experience better than anyone, and perhaps shares some of their blind spots.) Their abstract commitments were implemented in Constitutional doctrines that have articulated progressive historical struggles in the United States. The Founders were wrong and we're stuck with their mistake.

And we ARE stuck with it: much like quixotic third-party fantasies, in which the politics to create a viable third party to solve certain very real pathologies of our duopoly are harder to achieve than to solve those problems through and in spite of the duopoly, so too the politics to create a parliamentary system to solve certain very real pathologies of the anti-factionalist quirks of our Constitution are harder to achieve than to solve those problems through and in spite of the quirks of our anti-factionalist Constitution.

Saturday, December 19, 2015

Robocultic Q & A With a Tech Journalist

Background:

Last month I spent a few weeks in correspondence with an interesting writer and occasional journalist who stumbled upon some transhumanist sub(cult)ures and wanted to publish an exposé in a fairly high-profile tech publication. She is a congenial and informed and funny person, and I have no doubt she could easily write a piece about futurology quite as excoriating as the sort I do, but probably in a more accessible way than my own writing provides. I was rather hoping she would write something like that -- and I suspect she had drafts that managed the trick -- but the published result was a puff-piece, human interest narratives of a handful of zany robocultic personalities, that sort of thing, and ended up being a much more promotional than critical engagement, with a slight undercurrent of snark suggesting she wasn't falling for the moonshine, without saying why exactly or why it might matter. I'm not linking to the piece or naming my interlocutor because, as I said, I still rather like her, and by now I can't say that I am particularly surprised at the rather lame product eventuating from our (and her other) conversations. She is a fine writer, but I don't think there is much of an appetite for real political or cultural criticism of futurological discourse in pop-tech circles, at any rate when it doesn't take the form of making fun of nerdy nerds or indulging in disasterbatory hyperbole.

The transhumanists, singularitarians, techno-immortalists, digi-utopians, geo-engineers and other assorted futurological nuts I corral under the parodic designation "Robot Cultists" remain sufficiently dedicated to their far-out viewpoints that they do still continue to attract regular attention from journalists and the occasional academic looking for a bit of tech drama or tech kink to spout about. I actually think the robocultic sub(cult)ure is past its cultural heyday, but its dwindling number of stale, pale, male enthusiasts has been more than compensated lately by the inordinate number of high-profile "tech" billionaires who now espouse aspects of the worldview in ways that seem to threaten to have Implications, or at least make money slosh around in ways it might not otherwise do.

Anyway, as somebody who has been critiquing and ridiculing these views in public places for over a quarter century I, too, attract more attention than I probably deserve from journalists and critics who stumble upon the futurological freakshow and feel like reacting to it. For the last decade or so I have had extended exchanges with two or three writers a year, on average, all of whom have decided to do some sort of piece or even a book about the transhumanists. For these futurologically-fascinated aficionados I inevitably provide reading lists, contacts, enormous amounts of historical context, ramifying mappings of intellectual and institutional affiliation, potted responses to the various futurological pathologies they happen to have glommed onto, more or less offering an unpaid seminar in reactionary futurist discourse.

Articles do eventually appear sometimes. In them I am sometimes a ghostly presence, offering up a bit of decontextualized snark untethered to an argument or context to give it much in the way of rhetorical force. But far more often the resulting pieces of writing neither mention me nor reflect much of an engagement with my arguments. As a writer really too polemical for academia and too academic for popular consumption, I can't say that this result is so surprising. However, lately I have made a practice of keeping my side of these exchanges handy so that I can at least post parts of them to my blog to see the light of day. What follows is some comparatively pithy Q & A from the latest episode of this sort of thing, edited, as it were, to protect those who would probably prefer to remain nameless in this context:

Q & A:

Q: What do you think the key moral objections are to transhumanism?

Well, I try not to get drawn into discussions with futurists about whether living as an immortal upload in the Holodeck or being "enhanced" into a sexy comic book superhero body or being ruled by a superintelligent AI would be "good" or "bad." None of these outcomes are going to arrive to be good or bad anyway, none of the assumptions on which these prophetic dreams are based are even coherent, really, so the moral question (or perhaps this is more a question for a therapist) should probably be more like -- Is it good or bad to be devoting time to these questions rather than to problems and possibilities that actually beset us? What kind of work is getting done for the folks who give themselves over to infantile wish-fulfillment fantasizing on these topics? Does any of this make people better able to cope with shared problems or more attuned to real needs or more open to possibilities for insight or growth?

You know, in speculative literature the best imaginative and provocative visions have some of the same sort of furniture in them you find in futurological scenarios -- intelligent artifacts, powerful mutants, miraculous abilities -- but as in all great literature, their strangeness provides the distance or slippage that enables us to think more critically about ourselves, to find our way to sympathetic identification with what might otherwise seem threatening alienness, to overcome prejudices and orthodoxies that close us off to hearing the unexpected that changes things for the better. Science fiction in my view isn't actually about predictions at all, or it is only incidentally so: it is prophetic because it finds the open futurity in the present world, it builds community from the strangeness and promise in our shared differences.

But futurism and tech-talk aren't prophetic in this sense at all, when you consider them more closely -- they operate much more like advertising does, promising us easy money, eternal youth, technofixes to end our insecurities, shiny cars, skin kreme, boner pills. The Future of the futurists is stuck in the parochial present like a gnat in amber. It freezes us in our present prejudices and fears, and peddles an amplification of the status quo as "disruption," stasis as "accelerating change." Futurology promises to "enhance" you -- but makes sure you don't ask the critical questions: enhanced according to whom? for what ends? at what costs? Futurology promises you a life that doesn't end -- but makes sure you don't ask the critical questions: what makes a life worth living? what is my responsibility in the lives of others with whom I share this place and this moment? Futurology promises you intelligent gizmos -- but makes sure you don't ask the critical questions: if I call a computer or a car "intelligent," how does that change what it means to call a human being or a great ape or a whale intelligent? what happens to my sense of the intelligence lived in bodies and incarnated in historical struggles if I start "recognizing" it in landfill-destined consumer devices? I think the urgent moral questions for futurologists have less to do with their cartoonish predictions than with the morality of thinking futurologically at all, rather than thinking about real justice politically and real meaning ethically and real problems pragmatically.

Q: Why do you think climate change denial is so rife among this movement?

Many futurologists like to declare themselves to be environmentalists, so this is actually a tricky question. I think it might be better to say futurism is about the displacement rather than the outright denial of catastrophic anthropogenic climate change. For example, you have futurists like Nick Bostrom and Elon Musk who will claim to take climate change seriously but then who will insist that the more urgent "existential risk" humans face is artificial superintelligence. As climate refugees throng tent-cities and waters flood coastal cities and fires rage across states and pandemic disease vectors shift with rising temperatures these Very Serious futurological pundits offer up shrill warnings of Robocalypse.

Since the birth of computer science, generation after generation after generation, its intellectual luminaries have been offering up cocksure predictions about the imminence of world changing artificial intelligence, and they have never been anything but completely wrong about that. Isn't that rather amazing? The fact is that we have little scientific purchase on the nature of human intelligence and the curiously sociopathic body-alienated models of "intelligence" that suffuse AI-enthusiast subcultures don't contribute much to that understanding -- although they do seem content to code lots of software that helps corporate-military elites treat actually intelligent human beings as if we were merely robots ourselves.

Before we get to climate change denial, then, I think there are deeper denialisms playing out in futurological sub(cult)ures -- a terrified denial of the change that bedevils the best plans of our intelligence, a disgusted denial of the aging, vulnerable, limited, mortal body that is the seat of our intelligence, a horrified denial of the errors and miscommunications and humiliations that accompany the social play of our intelligence in the world. Many futurists who insist they are environmentalists like to talk about glorious imaginary "smart" cities or give PowerPoint presentations about geo-engineering "technofixes" to environmental problems in which profitable industrial corporate-military behemoths save us from the destruction they themselves have caused in their historical quest for profits. The futurists talk about fleets of airships squirting aerosols into the atmosphere, dumping megatons of filings into the seas, building cathedrals of pipes to cool surface temperatures with the deep sea chill, constructing vast archipelagos of mirrors in orbit to reflect the sun's rays -- and while they are hyperventilating these mega-engineering wet-dreams they always insist that politics have failed, that we need a Plan B, that our collective will is unequal to the task. Of course, this is just another variation of the moral question you asked already. None of these boondoggle fantasies will ever be built to succeed or fail in the first place; there is little point in dwelling on the fact that we lack the understanding of eco-systemic dynamics to know whether the impacts of such pharaonic super-projects would be more catastrophic than not; the whole point of these exercises is to distract the minds of those who are beginning to grasp the reality of our shared environmental responsibilities from the work of education, organization, agitation, legislation, investment that can be equal to this reality. Here, the futurological disgust with and denial of bodies, embodied intelligence, becomes denial of the material substance of political change, of historical struggle, bodies testifying to violation and to hope, assembled in protest and in collaboration.

Many people have been outraged recently to discover that Exxon scientists have known the truth about their role in climate catastrophe for decades and lied about it to protect their profits. But how many people are outraged that just a couple of years ago ExxonMobil CEO Rex Tillerson declared that climate change is simply a logistical and engineering problem? This is the quintessential form that futurological climate-change displacement/denialism takes: it begins with an apparent concession of the reality of the problem and then trivializes it. Futurology displaces the political reality of crisis -- who suffers climate change impacts? who dies? who pays for the mitigation efforts? who regulates these efforts? who is accountable to whom and for what? who is most at risk? who benefits and who profits from all this change? -- into apparently "neutral" technical and engineering language. Once this happens the diverse demands and needs of the stakeholders for change vanish and the technicians and wonks appear, white faces holding white papers enabling white profits.

Q: What are the most obvious historical antecedents to this kind of thinking?

Futurological dreams and nightmares are supposed to inhabit the bleeding edge, but the truth is that their psychological force and intuitive plausibility draws on a deeply disseminated archive of hopes and tropes... Eden, Golem, Faust, Frankenstein, Excalibur, Love Potions, the Sorcerer's Apprentice, the Ring of Power, the Genie in a Bottle, the Fountain of Youth, Rapture, Apocalypse and on and on and on.

In their cheerleading for superintelligent AI, superpowers/techno-immortalism, and digi-nano-superabundance it isn't hard to discern the contours of the omni-predicates of centuries of theology: omniscience, omnipotence, omnibenevolence. Patriarchal priests and boys with their toys have always marched through history hand in hand. And although many futurologists like to make a spectacle of their stolid scientism it isn't hard to discern the old fashioned mind-body dualism in their digital-utopian virtuality uploading fantasies. Part of what it really means to be a materialist is to take materiality seriously, which means recognizing that information is always instantiated on a non-negligible material carrier, which means it actually matters that all the intelligence we know as such has, as yet, been biologically incarnated. There is a difference that should make a difference to a materialist in the aria sung in the auditorium, heard on vinyl, pulled up on .mp3. Maybe something like intelligence can be materialized otherwise, but will it mean all that intelligence means to us in an imaginative, empathetic, responsible, rights-bearing being sharing our world? And if it doesn't, is "intelligence" really the word we should use or imagine using to describe it?

Fascination with artifacts that seem invested with spirit -- puppets, carnival automata, sex-dolls -- is as old as or older than written history. And of course techno-fetishism, techno-reductionism, and techno-triumphalism have been with us since before the Treaty of Westphalia ushered in the nation-state modernity that has preoccupied our attention with culture wars in the form of les querelles des anciens et des modernes right up to our late modern a-modern post-modern post-post-modern present: big guns and manifest destinies, eugenic rages for order, deaths of god and becomings as gods, these are all old stories. The endless recycling of futurological This! Changes! Everything! headlines about vat-grown meat and intelligent computers and cost-free fusion and cures for aging every few years or so is the consumer-capitalist froth on the surface of a brew of centuries-old techno-utopian loose-talk and wish-fulfillment fantasizing.

Q: Why should people be worried about who is pushing these ideas?

Of course, all of this stuff is ridiculous and narcissistic and technoscientifically illiterate and all too easy to ignore or deride... and I do my share of that derision, I'll admit that. But you need only remember the example of the Neoconservative foreign-policy "Thought Leaders," marginalized for decades, to understand the danger represented by tech billionaires and their celebrants making profitable promises and warnings about super-AI and immortality-meds and eco escape hatches to Mars. A completely discredited klatch of kooks who fancy themselves the Smartest Guys in the Room can cling to their definitive delusions for a long time -- especially if the nonsense they spew happens to bolster the egos or rationalize the profits of very rich people who want to remain rich above all else. And eventually such people can seize the policy making apparatus long enough to do real damage in the world.

For over a generation the United States has decided to worship as secular gods a motley assortment of very lucky, rather monomaniacal, somewhat sociopathic tech venture capitalists, few of whom ever actually made anything, but many of whom profitably monetized (skimmed) the collective accomplishments of nameless enthusiasts, and most of whom profitably marketed (scammed) gizmos already available and usually discarded elsewhere as revolutionary novelties. The futurologists provide a language in which these skim and scam operators can reassure themselves that they are Protagonists of History, shepherding consumer-sheeple to techno-transcendent paradise and even godlikeness. It is a mistake to dismiss the threat represented by such associations -- and I must say that in the decades I have been studying and criticizing futurologists they have only gained in funding, institutional gravity, and reputational heft, however many times their animating claims have been exposed as pernicious nonsense and reviled.

But setting those very real worries aside, I also think the futurologists are interesting objects and subjects of study because they represent a kind of reductio ad absurdum of prevailing attitudes and assumptions and aspirations and justificatory rhetoric in neoliberal, extractive-industrial, consumer-oriented, marketing-suffused, corporate-military society: if you can grasp the desperation, derangement and denialism of futurological fancies, it should put you in a better position to grasp the pathologies of more mainstream orthodoxies in our public discourse and authorizing institutions, our acquiescence to unsustainable consumption, our faith in technoscientific, especially military, circumventions of our intractable political problems, our narcissistic insistence that we occupy a summit from which to declare differences to be inferiorities, our desperate denial of aging, disease, and death and the death-dealing mistreatment of others and of ourselves this denialism traps us in so deeply.

Q (rather later): [O]ne more thing: who were the most prominent members of the extropians list? Anyone I've missed? Were R.U Sirius or other Wired/BoingBoing writers and editors on the list? Or engineers/developers etc?

Back in Atlanta in the 1990s, I read the Extropy zine as a life-long SF queergeek drawn to what I thought were the edges of things, I suppose, and I was a lurker on the extropians list in something like its heyday. This was somewhere in the '93-'99 range, I'm guessing. I posted only occasionally since even then most of what I had to say was critical -- the philosophy seemed like amateur hour and the politics were just atrocious -- and it seemed a bit wrong to barge into their clubhouse and piss in the punch bowl if you know what I mean... I was mostly quiet.

The posters I remember as prominent were Max More and Natasha Vita-More, of course, Eliezer Yudkowsky, Damien Broderick (an Australian SF writer), Eugen Leitl, Perry Metzger, Hal Finney, Sasha Chislenko, Mark Plus, Giulio Prisco, Ramona Machado, Nancy Lebovitz… You know, people tend to forget the women's voices because it was such an insistently white techbro kinda sorta milieu. I'm not sure how many women stuck with it, although Natasha is definitely a piece of work, and Ramona was doing something of a proto Rachel Haywire catsuited contrarian schtick -- Haywire's a more millennial transhumanoid who wasn't around back then. Let's see. There was David Krieger too (I made out with him at an extropian pool party in the Valley of the Silly Con back in '95, I do believe).

I don't think I remember RU Sirius ever chiming in, I personally see him as more of an opportunistic participant/observer/stand-up critic type, really, and I know I remember Nick Szabo's name but I'm not sure I remember him posting a lot. You mentioned Eric Drexler, but I don't remember him posting, he was occasionally discussed and I know he would appear at futurist topic conferences with transhumanoid muckety mucks like More and the cypherpunks like Tim May and Paul Hughes. I do remember seeing Christine Peterson a couple of times.

Wired did a cover story called "Meet The Extropians" which captures well some of the flavor of the group; that was from 1993. Back then, I think techno-immortalism via cryonics and nanobot miracle medicine was the big draw (Aubrey de Grey appeared a bit later, I believe, but the sub(cult)ure was ready for him for sure), with a weird overlap of space stuff that was a vestige from the L5 Society and also a curious amount of gun-nuttery attached to the anarcho-capitalist enthusiasm and crypto-anarchy stuff.

It's no surprise that bitcoinsanity had its birth there, and that the big bucks for transhumanoid/singularitarian faith-based initiatives would come from PayPal billionaires like the terminally awful robocultic reactionary Peter Thiel, given the crypto-currency enthusiasm. Hal Finney was a regular poster at extropians and quite a bitcoin muckety muck right at the beginning -- I think he was party to the first bitcoin transaction, in fact.

Back in those days I was working through connections of technocultural theory and queer theory in an analytic philosophy department in Georgia, and the extropians -- No death! No taxes! -- seemed to epitomize the California Ideology. I came to California as a Queer National with my mind on fire to work with Judith Butler, and I was lucky enough to spend a decade learning from her in the Rhetoric Department at Berkeley, where I ended up writing my diss about privacy and publicity in neoliberal technocultures, Pancryptics. But I never lost sight of the transhumanists -- they seemed and still seem to me to symptomize in a clarifying extreme form the pathologies of our techno-fetishistic, techno-reductionist, techno-triumphalist disaster capitalism. Hope that helps!

Q (much later): Tackling this thing has been a lot more difficult than I imagined it would be. Right now it's sitting on 20,000 words and has to come down to at least half that (pity my editor!). I've gone through quite a journey on it. I still think very much that these ideas are bad and a reflection of a particularly self-obsessed larger moment, and that people should be extremely concerned about how much money is going into these ideas that could be so much better spent elsewhere. The bizarre streak of climate denialism is likewise incredibly disturbing…. But then I kind of came around in a way to sympathising with what is ultimately their fear which is driving some of this, an incredibly juvenile fear of dying. But a fear of being old and infirm and in mental decline in a society that is in denial about the realities of that, and which poses few alternatives to that fate for all of us, in a way I can understand that fear…. In any case, amazing that they let you proof read [their official FAQ] for them, even though you are so critical of their project! Or do you think they were just grateful for someone who could make it read-well on a sentence level?

You have my sympathies, the topic is a hydra-headed beast when you really dig in, I know. Nick Bostrom and I had a long phone conversation in which I leveled all sorts of criticisms of transhumanism. That I was a critic was well known, but back then socialist transhumanist James Hughes (who co-founded IEET with him) and I were quite friendly, and briefly I was even "Human Rights" fellow at IEET myself -- which meant that they re-published some blog posts of mine. (I write about that and its rather uncongenial end here.) Anyway, Bostrom and I had a wide-ranging conversation that took his freshly written FAQ as our shared point of departure. He adapted/qualified many claims in light of my criticisms, but ignored a lot of them as well and of course the central contentions of the critique couldn't be taken up without, you know, giving up on transhumanism. As a matter of fact, we didn't get past the first half of the thing. It was a good conversation though, I remember it was even rather fun. I do take these issues seriously as you know and, hell, I'll talk to anybody who is going to listen in a real way.

You know, I've been criticizing futurism for decades -- there were times when I was one of the few people truly informed of their ideas even if I was critical of them, and some of them appreciated the chance to sharpen their arguments on a critic. I've had many affable conversations with all sorts of these folks, Aubrey de Grey, Robin Hanson, Max More even. The discourse is dangerous and even evil in my opinion, but its advocates are human beings which usually means conversations can happen face to face.

I know what you mean when you say you sympathize after a fashion upon grasping the real fear of mortality driving so much of their project -- and I would say also the fear of the uncontrollable role of chance in life, the vulnerability to error and miscommunication in company. But you know reactionary politics are always driven by fear -- and fear is always sad. I mean, the choices are love or fear when it comes down to it, right? And to be driven by fear drives away so much openness to love and there's no way to respond to that but to see the sadness of it -- when it comes to it these fears are deranging sensible deliberation about technoscientific change at a historical moment when sense is urgently needed, these fears make them dupes, and often willing ones, of plutocratic and death-dealing elites, these fears lead them to deceive themselves and deceive others who are also vulnerable. One has to be clear-headed about such things, seems to me.

Q (still later): Have entered new phase: What if the Extropians were just a Discordian-type joke that other people came to take seriously?

Yes, they're a joke. But it's on us, and they aren't in on it. As I mentioned before, the better analogy is the Neocons: they were seen as peddlers of nonsense from the perspective of foreign policy professionals (even most conservatives thought so) but they were well-funded because their arguments were consoling and potentially lucrative to moneyed elites and eventually they stumbled into power via Bush and Cheney whereupon they implemented their ideas with predictable (and predicted) catastrophic consequences in wasted lives and wasted wealth. To be clear: the danger isn't that transhumanoids will code a Robot God or create a ruler species of immortal rich dudes with comic-book sooper-powers, but that they will divert budgets and legislation into damaging policies and dead ends that contribute to neglected health care, dumb and dangerous software, algorithmic harassment and manipulation, ongoing climate catastrophe, the looting of public and common goods via "disruptive" privatization, exploitative "development," cruel "resilience," and upward-failing techbro "Thought Leadership."

Sunday, December 06, 2015

Three Fronts in the Uploading Discussion -- A Guest Post by Jim Fehlinger

Longtime friend and friend-of-blog Jim Fehlinger posted a cogent summarizing judgment (which doesn't mean concluding by any means) of the Uploading discussion that's been playing out in this Moot non-stop for days. I thought it deserved a post of its own. In the Moot to this post, I've re-posted my responses to his original comments to get the ball rolling again. I've edited it only a very little, for continuity's sake, but the link above will take the wary to the originals.--d

It strikes me that this conversation (/disagreement) has been proceeding along three different fronts (with, perhaps, three different viewpoints) that have not yet been clearly distinguished:

1. Belief in/doubts about GOFAI ("Good Old-Fashioned AI") -- the 50's/60's Allen Newell/Herbert Simon/Seymour Papert/John McCarthy/Marvin Minsky et al. project to replicate an abstract human "mind" (or salient aspects of one, such as natural-language understanding) by performing syntactical manipulations of symbolic representations of the world using digital computers. The hope initially attached to this approach to AI has been fading for decades. Almost a quarter of a century ago, in the second edition of his book, Hubert Dreyfus called GOFAI a "degenerating research program":
Almost half a century ago [as of 1992] computer pioneer Alan Turing suggested that a high-speed digital computer, programmed with rules and facts, might exhibit intelligent behavior. Thus was born the field later called artificial intelligence (AI). After fifty years of effort [make it 70, now], however, it is now clear to all but a few diehards that this attempt to produce artificial intelligence has failed. This failure does not mean this sort of AI is impossible; no one has been able to come up with a negative proof. Rather, it has turned out that, for the time being at least, the research program based on the assumption that human beings produce intelligence using facts and rules has reached a dead end, and there is no reason to think it could ever succeed. Indeed, what John Haugeland has called Good Old-Fashioned AI (GOFAI) is a paradigm case of what philosophers of science call a degenerating research program.

A degenerating research program, as defined by Imre Lakatos, is a scientific enterprise that starts out with great promise, offering a new approach that leads to impressive results in a limited domain. Almost inevitably researchers will want to try to apply the approach more broadly, starting with problems that are in some way similar to the original one. As long as it succeeds, the research program expands and attracts followers. If, however, researchers start encountering unexpected but important phenomena that consistently resist the new techniques, the program will stagnate, and researchers will abandon it as soon as a progressive alternative approach becomes available.
[That research program i]s still degenerating, as far as I know.

Dale and I agree in our skepticism about this one. Gareth Nelson, it would seem (and many if not most >Hists, I expect) still holds out hope here. I think it's a common failing of computer programmers. Too close to their own toys, as I said before. ;->

2. The notion that, even if we jettison the functionalist/cognitivist/symbol-manipulation approach of GOFAI, we still might simulate the low-level dynamic messiness of a biological brain and get to AI from the bottom up instead of the top down. Like Gerald Edelman's series of "Darwin" robots or, at an even lower and putatively more biologically-accurate level, Henry Markram's "Blue Brain" project.

Gareth seems to be on-board with this approach as well, and says somewhere above that he thinks a hybrid of the biological-simulation approach and the GOFAI approach might be the ticket to AI (or AGI, as Ben Goertzel prefers to call it).

Dale still dismisses this, saying that a "model" of a human mind is not the same as a human mind, just as a picture of you is not you.

I am less willing to dismiss this on purely philosophical grounds. I am willing to concede that if there were digital computers fast enough and with enough storage to simulate biological mechanisms at whatever level of detail turned out to be necessary (which is something we don't know yet) and if this sufficiently-detailed digital simulation could be connected either to a living body with equally-miraculously (by today's standards) fine-grained sensors and transducers, or to a (sufficiently fine-grained) simulation of a human body immersed in a (sufficiently fine-grained) simulation of the real world -- we're stacking technological miracle upon technological miracle here! -- then yes, this hybrid entity with a human body and a digitally-simulated brain, I am willing to grant, might be a good-enough approximation of a human being (though hardly "indistinguishable" from an ordinary human being, and the poor guy would certainly find verself playing a very odd role indeed in human society, if ve were the first one). I'm even willing to concede (piling more miracles on top of miracles by granting the existence of those super-duper-nanobots) the possibility of "uploading" a particular human personality, with memories intact, using something like the Moravec transfer (though again, the "upload" would find verself in extremely different circumstances from the original, immediately upon awakening). This is still not "modelling" in any ordinary sense of the word as it occurs in contemporary scientific practice! It's an as-yet-unrealized (except in the fictional realm of the SF novel) substitution of a digitally-simulated phenomenon for the phenomenon itself (currently unrealized, that is, except in the comparatively trivial case in which the phenomenon is an abstract description of another digital computer).

However, I am unpersuaded, Moravec and Kurzweil and their fellow-travellers notwithstanding, that Moore's Law and the "acceleration of technology" are going to make this a sure thing by 2045. I am not even persuaded that we know enough to be able to predict that such a thing might happen by 20450, or 204500, whether by means of digital computers or any other technology, assuming a technological civilization still exists on this planet by then.

The physicist Richard P. Feynman, credited as one of the inventors of the idea of "nanotechnology", is quoted as having said "There's plenty of room at the bottom." Maybe there is. Hugo de Garis thinks we'll be computing using subatomic particles in the not too distant future! If they're right, then -- sure, maybe all of the above science-fictional scenarios are plausible. But others have suggested that maybe, just maybe, life itself is as close to the bottom as our universe permits when it comes to, well, life-like systems (including biologically-based intelligence). If that's so, then maybe we're stuck with systems that look more-or-less like naturally-evolved biochemistry.

3. Attitudes toward the whole Transhumanist/Singularitarian mishegas. What Richard A. L. Jones once called the "belief package", or what Dale commonly refers to as the three "omni-predicates" of >Hist discourse: omniscience=superintelligence; omnipotence=super-enhancements (including super-longevity); omnibenevolence=superabundance.

This is a very large topic indeed. It has to do with politics, mainly the politics of libertarianism (Paulina Borsook's Cyberselfish; Barbrook & Cameron's "The Californian Ideology"), religious yearnings (the "Rapture of the Nerds"), cult formation (especially sci-fi tinged cults, such as Ayn Rand's [or Nathaniel Branden's, if you prefer] "Objectivism", L. Ron Hubbard's "Scientology", or even Joseph Smith's Mormonism!), psychology (including narcissism and psychopathy/sociopathy), and other general subjects. Very broad indeed!

Forgive me for putting it this insultingly, but I fear Gareth may still be savoring the Kool-Aid here.

Dale and I are long past this phase, though we once both participated on the Extropians' mailing list, around or before the turn of the century. When we get snotty (sometimes reflexively so ;->), it's the taste of the Kool-Aid we're reacting to, which we no longer enjoy, I'm afraid.

Friday, December 04, 2015

The Immaterialism of Futurological Materialism

Upgraded and adapted from the still-ongoing Moot. In response to robocultist (he affirms the designation; that isn't just name-calling) Gareth's repeated declarations to the effect that "the internal [by which he means the actually-existing material, that is to say, biological] implementation [of real-world intelligence actually in evidence] does not matter," I quipped: "Some materialist you turned out to be."

Whereupon he reacted, with robotic predictability:
Bit of a non sequitur that. I say the internal implementation does not matter so long as the external behaviour still yields intelligence, in what way does that contradict materialism? If anything, claiming that it matters whether there's neurons or silicon chips implementing intelligent behaviour is claiming there's something important about neurons that goes beyond their material behaviour.
An actual materialist should grasp that the actually-existing material incarnation of minds, like the actually-existing material carrier of information, is non-negligible to the mind, to the information. The glib treatment of material differences as matters of utter indifference, as perfectly inter-translatable without loss, as cheerfully dispensable is hardly the attitude of a materialist. One might with better justice describe the attitude as immaterialist.

Once again, you airily refer to "silicon chips implementing intelligent behavior" when that has never once happened, looks nothing like something about to happen, and is the very possibility at the center of the present dispute. However invigorating the image of this AI is in your mind -- it is not real, nor is it a falsifiable thought-experiment, nor is it a destiny, nor is it a burning bush, nor is it writing on a wall, and those of us who fail to be moved as you are by this futurological fancy are not denying reality: its stipulated properties -- however fervently asserted by its futurological fanboys -- are not facts in evidence. In response to this charge you will deny, as you have done every other time I have made it, that you are in fact claiming AI is "real" or would be "easy" -- but time after time after time you conjure up these fancies in making your rhetorical case and attribute properties to them with which skeptics presumably have to deal, just because you want them to be true so fervently. One might just as well argue over how many angels can dance on a pinhead.

And then, too, once again, in this formulation you insinuate that my recognition that such real-world intelligence as actually exists happens to be materialized in biological organization amounts to positing something magical or supernatural about brains. No, Gareth: the intelligence that exists is biological, and the artificial intelligence to which you attribute all sorts of pet properties does not exist. To entertain the logical possibility that phenomena legible to us as intelligent might be materialized otherwise does not mean that they are, that we can engineer them, or that we know enough about the intelligence we materially encounter to be of any help were we to want to engineer intelligence otherwise. None of that is implied in the realization that there is no reason to treat intelligence as somehow supernatural. None of it. You may need to have a good cry in your pillow for a moment after that sinks in before we continue. It's fine, I'll wait.

Now, again, a "materialism" about mind demands recognition that the materialization of such minds as are in evidence is biological. That intelligence could be materialized otherwise is possible, but not necessarily plausible, affordable, or even useful. Maybe it would be, maybe not. Faith-based techno-transcendental investment of AI with wish-fulfillment fantasies of an overcoming of the scary force of contingency in life, an arrival at omnicompetence no longer bedeviled by the humiliations of error or miscommunication, the driving of engines of superabundance delivering treasure beyond the dreams of avarice, or offering up digital immortalization of an "info-soul" in better-than-real virtuality may make AI seem so desirable that techno-transcendentalists of the transhumanoid, singularitarian kinds want to pretend we know enough to know how to build it when we do not, but that has nothing to do with science or materialism. Gareth and his futurological friends' attitudes look to be common or garden variety religiosity of the most blatant kind, if I may say so. And even if the faithful wear labcoats rather than priest's vestments, it's not like we can't see it's all still from Party City.
The human mind is not immune from scientific investigation and understanding, and neither is the brain (the physical implementation of the mind). That should be a fairly uncontroversial viewpoint. I simply go one further and say that human brains are not immune from simulation, and simulating a brain would automatically get you a mind. 
No one has denied that intelligence can be studied and better understood. I do wonder whether Gareth's parenthetic description of the brain as "the physical implementation of the mind" already sets the stage for his desired scene of an interested agent implementing an intelligence when there is actually no reason to assume such a thing where the biologically incarnated mind is concerned. People in robot cults should possibly take care before assuming the air of adjudicating just which disputes are scientifically controversial or not, by the way. When he goes on to say "I simply go one [step] further" in turning to the claim that simulating a brain automatically gets you a mind, I disagree that there is anything "simple" about that leap, or that it is in any sense a logical elaboration of a similar character to the preceding (as he implies by the word "further"). Not only does simulating a brain not obviously or necessarily "automatically" get you a mind, it quite obviously does not, and necessarily cannot, get you the mind so simulated. To say otherwise is not materialist, but immaterialist -- but worse, it is palpably insane. You are not a picture of you, a picture of a brain is not a brain, and a moving picture of a mind's operation in some respects is not the mind's operation. You may be stupid and insensitive enough not to see the difference between a romantic partner and a fuck doll got up to look like that romantic partner, but you should not necessarily expect others to be so dull if you bring your doll to meet the family or hope to elude prosecution for murdering your partner when the police come calling.

PS: In Section Three of Futurological Discourses and Posthuman Terrains I connect the pathologically extreme, immaterialist robocultic ideology I ridicule here to a more prevailing, mainstream neoliberal futurology in which immaterialist ideology plays out, for example, in celebrations of, or at any rate justifications for, fraudulent financialization in global, digital-developmentalist, corporate-military think-tank discourse.