I think that few of the technologies under consideration or discussion here are even remotely proximate enough for legislators, policy makers, or even gonzo investor types to enter into serious deliberation about them on their own terms, and frankly I think that few of the phenomena (consciousness, intelligence, flourishing, wisdom) into which these techniques are presumably supposed to be intervening are even remotely well understood enough to provide the basis for confident assessments.
That is to say, I am presumably one of those poor benighted souls who are "woefully unaware about realistic technology possibilities" like "smart drugs, genetic selection and engineering, and the use of external devices that affect the brain or the learning process." Although my ignorance is attributed by Mr. Wood to a scientific illiteracy shaped by watching too much science fiction, it occurs to me that, quite to the contrary, hyperbolic claims about total rapid transformations of the human condition through the intervention of fantastically efficacious techniques and devices are in fact to be found more in science fiction than in actual science practice or science policy (and let me say that when I refer to science fiction here, I am including an enormous amount of advertising imagery and the promotional discourse one finds Very Serious Futurologists indulging in with PowerPoint presentations in think-tank-infused/enthused conference settings). In this regard, it isn't exactly confidence inspiring to hear a breathless reference to record-breaking RSVPs for a talk proffered as a sign of… well, who knows what exactly? Even if Very Serious "transhumanists" like Nick Bostrom manage to get a million Facebook "likes" for their pitch this is not, you will forgive me, a reason to think "means to stimulate seemingly paranormal abilities and transcendental experiences" are indeed, as Mr. Wood suggests, "apparent." I must say I do not agree with the article's conclusion that there is any proper connection between indulging in wish-fulfillment fantasizing and being "better informed."
Nevertheless, I still think it is important to take articles like this one seriously, because they have impacts in altogether different domains than the ones they say they mean to shape. It is crucial to recognize that whenever one speaks of "enhancement," the term is freighted with unstated questions -- enhancement for precisely whom? according to what values? in the service of what end? at the cost of what other ends?
There simply is no such thing as a neutral "enhancement" that benefits everybody equally without costs, let alone unintended consequences. What is interesting about this sort of discussion is that it pretends all of the stakes are aligned, all the relevant facts are known, all the values are already shared, when of course none of that is the least bit true. "Enhancement" discourse evacuates inextricably political debates of their political substance, inevitably in the service of the implementation of a particular ideology, a particular agenda, a particular constellation of norms (always uninterrogated, often even unconscious). Again, while few of the techniques under discussion here are actually either real or emerging, they function as symptoms of the underlying politics they disavow, but they also function as frames that would refigure and rewrite humanity (in the present, not in "The Future" at all, mind you) in terms more congenial to those underlying politics. That is to say, the apparently technical, apparently neutral, apparently universal, apparently apolitical language of "enhancement" seeks to do political work in the most efficacious imaginable mode, the mode of not doing politics at all.
To get a better sense of what I mean here, notice the exchange of views highlighted in the piece between a critic of this techno-utopian moral-engineering eugenicism, Anne Kerr, and the author. In Mr. Wood's summary of her views Professor Kerr pointed out that "enhancements provided by technology… would likely only benefit a minority of individuals, potentially making existing social inequalities even worse than at present." The upshot of this observation is that it is inapt to use the word "enhancement" in the first place to describe these sorts of little futurological allegories. She presumably went on to illustrate her point with a few imaginary examples: "Imagine what might happen if various clever people could take some pill to make themselves even cleverer? It’s well known that clever people often make poor decisions. Their cleverness allows them to construct beguiling sophistry to justify the actions they already want to take… Or imagine if rapacious bankers could take drugs to boost their workplace stamina and self-serving brainpower -- how much more effective they would become at siphoning off public money to their own pockets!"
Hearing Kerr's concerns, Mr. Wood declares he felt "bound" to respond:
would you be in favour of the following examples of human enhancement, assuming they worked? An enhancement that made bankers more socially attuned, with more empathy, and more likely to use their personal wealth in support of philanthropic projects? An enhancement that made policy makers less parochial, less politically driven, and more able to consider longer-term implications in an objective manner? And an enhancement that made clever people less likely to be blind to their own personal cognitive biases, and more likely to genuinely consider counters to their views? In short, would you support enhancements that would make people wiser as well as smarter, and kinder as well as stronger?

Of course, to assume in advance that such "enhancements" worked is precisely the issue under discussion, so it seems a rather flabbergasting concession to demand in advance, but for me the greater difficulty is the way such a discussion has already been framed by Mr. Wood's response as one in which what we mean when we say a device is "working" is the relevant vocabulary to deploy when what we are discussing is moral development or political reconciliation or human flourishing. In pointing out that clever people often behave foolishly, part of what Kerr is calling into question is whether or not we are quite right to value clever people as clever, or right to pretend we mean the same things when we speak of cleverness at all. Mr. Wood seems in his cleverness to have missed that point, predictably enough. Why should readers concede, as his response to Kerr demands we do, that we all know and share a sense of what he means when he speaks of a banker being enhanced into "social attunement"? How does one square enhancement with attunement even in principle? Attunement to what, when, how long, how often, exactly? Would it be right to describe as "philanthropic" a person re-engineered to reflect some person's idiosyncratic image of what a philanthropist acts like?
Was Kerr even bemoaning a lack of philanthropy when she expressed worries about the recklessness and fraudulence of too many bankers? Who is to say in advance what the relevant "cognitive biases" are that frustrate good outcomes? Aren't both the biases and goods in question here at least partially a matter of personal perspective, a matter of personal preference? Why is it assumed that parochialism always favors the shorter term over the longer-term? When Keynes reminded us that "in the long run we are all dead" he was not recommending short term thinking in general, but pointing out that sometimes avoidable massive suffering in the short term demands risks (stimulative public deficit spending) that long-term prudence would otherwise disdain.
Wisdom is a tricky business -- if I may condense several thousand years of literature into a chestnut -- and it scarcely seems sensible to fling around questions like "would you support enhancements that would make people wiser as well as smarter, and kinder as well as stronger?" when there are so many vital questions at the heart of what we mean when we speak of wisdom, smartness, kindness, and strength in the first place. Not to put too fine a point on it, it seems to me that whatever the answers to the questions Mr. Wood is posing here, everybody engaging in this conversation on these terms looks to me to be made rather more dumb than I think we need be. Is that what Mr. Wood means by "working"?
Unsurprisingly, Professor Kerr apparently responded to Mr. Wood's challenge by rejecting it, and proposing instead that we focus on processes of education and political democratization. Wood countered by complaining, "These other methods don’t seem to be working well enough." He writes that he wishes he had elaborated the example of the failure of our political processes to be equal to environmental problems as an example of what he means -- I suspect Mr. Wood would also be a booster for "geo-engineering" then, angels and ministers of grace defend us! Of course, I wonder what it might mean to say democracy isn't "working," exactly? Does that mean the outcomes Mr. Wood would prefer have not yet prevailed? Does it mean he thinks desirable outcomes, in failing now, must then always fail? If he wants to circumvent these failed processes with "technology," does he discount the political processes through which "technology" ends up being funded, regulated, implemented, maintained, its effects distributed and understood, and so on?
When Mr. Wood glowingly quotes Julian Savulescu and Ingmar Persson about how we are "unfit" for "The Future," and how "there are few cogent philosophical or moral objections to… moral bioenhancement. In fact, the risks we face are so serious that it is imperative we explore every possibility… We simply can’t afford to miss opportunities…" I find myself wondering just who this "we" he and they are talking about consists of. Who is included in, and who excluded from, this "we"? Who is deciding what a "cogent objection" to this line of what looks to me like incoherent hyperbolic bs consists of? Who is deciding what "opportunities" can't be missed by whom? Whose pet vision of "The Future" exactly are we talking about here? Given that the democratic "we" has already been bagged for disposal in this chirpy little number, I think the answers to these questions take on a certain urgency.