Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All
Sunday, September 23, 2007
No Limits! (And Other Foolishness)
I always cringe whenever some "inspirational" business huckster barks into the mic or at the dinner table about how "innovation," "production," "enterprise" or whatever slogan the Amway mindset has latched onto at the moment has "no limits, man!" so, you know, we should all presumably become dot-eyed marauding maniacs and "go for it!" and "take no prisoners!" or what have you.
Such attitudes seem to me always to depend on, or even simply straightforwardly to translate to, the ugly awful fervently held faith that there will always be someone else around to clean up our messes for us.
This expressed disdain for the very idea of limits always amounts to, and sometimes baldly testifies to, a disdain for the actually-existing living plurality of people who are superfluous to or at odds with the streamlined trajectory along which our stubbornly insensitive, er, "motivated," go-getter imagines himself to be getting.
Nowhere is this disdain more conspicuous to me than among Superlative Technocentrics, nearly all of them caught up in a frenzy of self-promotion, self-selection, and delusive mutual reinforcement as they handwave from the palpable and urgent reality of ongoing and emerging disruptive technoscientific change to what is instead essentially irrational and certainly pseudoscientific transcendentalizing talk of omni-predicated technologies delivering post-bodily "immortality," post-embodied "consciousness," post-economic "abundance," post-historical "singularity," and so on.
While I am quite conscious of the ways in which the overwhelming inputs from planetary networked media, the biomedical intervention into customary understandings about when lives can properly be said to begin and end and with what expectations about capacities and changing function we might properly invest them, the intervention of instrumental rationality into production at incomparably small and large scales, the unprecedented appearance on the scene of weapons of massive and insane destructiveness, and so on, are all deranging our collective sense of the limits that hitherto have been taken to define the human condition, it has always seemed to me a curious confusion that this sudden, intensive, and extensive transformative technodevelopmental storm-surge has been treated by some as a portent of an upcoming overcoming of human finitude as such, as the looming confrontation with a techno-transcendentalizing overcoming of the very idea of limits altogether. And given the frequent coloration of such claims by fairly conventional theological notions of omni-potence, omni-science, and omni-benevolence, this curious confusion seems to me all the more curiously conventionally religious, especially considering the barking militant anti-religiosity of so many who seem to indulge in this sort of handwaving technology-talk in the first place. To me, it has always seemed more sensible to say that the technodevelopmental derangement of customary limits is experienced quite as much as the emergence of a new limit -- the loss of our ability to claim with the sort of confidence we've sometimes depended on just what our definitive limits actually will consist of in matters of political and ethical concern that perplex us -- as it is the more emancipatory overcoming of certain old limits.
Be that as it may, I'll admit that it is easiest to focus one's critical attentions on the flabbergasting practical naivete of Superlative technodevelopmental accounts that rely on loose analogies (there are, to be blunt, differences that make a difference between human brains and computers, biological organisms and nanofactories, aging bodies and well-maintained mansions, stakeholder deliberation and the unilateral implementation of optimal outcomes deduced from ideal formulations), accounts that overestimate the state of our knowledge of the relevant technoscience, accounts that overemphasize the smooth function of technology in general, accounts that underestimate the role of social, cultural, and political factors on the vicissitudes of technoscientific change and its impacts, and accounts that treat complex dynamisms as linear processes and complex phenomena as simple monoliths.
It is also easy to focus on the, shall we say, symptomatic dimensions of Superlative Technology discourses, with their bevy of boastful boys, with their curiously conspicuous comic book iconography, with their eager self-marginalizing subcultural politics (hence the incessant vulnerability to and defensiveness about charges of weird Robot Cultism), with their non-negligible exhibition of body-loathing (from their occasional expressions of old-school Cyberpunk disdain of the "meat-body" to their widespread ongoing incomprehension of disability activists who quite righteously insist, "nothing about us without us"), with their ongoing difficulty in nudging their demographic much beyond its conspicuous -- tho' admittedly not exclusive -- white maleness in a world in which whites and males are minorities otherwise, and with the lingering presence of "market fundamentalist" intellectuals among them, taken seriously there as they are almost nowhere else (after all, the neoliberal and neoconservative policies which gave "anarcho-capitalist" and "free-market" abstractions the only actual life they ever had or will ever have were undertaken by incumbent interests with the cynical understanding that these "ideas" provide ideal cover for confiscatory wealth concentration, but there are few actually intelligent people who still believe in these market fundamentalist pieties on their face, if anybody ever did, apart perhaps from a few awkward earnest Randians, poor things).
But the actual focus of my own critique of Superlative Technology discourses (even if I'll admit I have often directed my jeremiads against these more conspicuously vulnerable dimensions of Superlativity) is on their pernicious anti-politicizing and, more specifically, their almost always anti-democratizing force. Needless to say, I do indeed think the highly fetishized, irrationally hyperbolized, faithfully transcendentalized, falsely monolithicized, obsessively singularized technodevelopmental outcomes that preoccupy Superlative Technocentrics are the farthest thing from plausible in their specific Superlative formulations. But even if I were to grant them more than the negligible plausibility of logical possibility (which is quite enough for most Superlative Technocentrics, and I'll let the reader puzzle through the implications of that low bar given the force of True Belief it seems so often to underwrite), the fact remains that I still do not agree that Superlativity provides the best discursive lens through which to cope with the extraordinarily sweeping implications typically attributed to these outcomes by the Superlative Technocentrics themselves.
(A side note: Exactly in analogy to the "New Normal" of contemporary terror-alerts, the attribution of such sweeping implications to what amount at best to thought-experiments and at worst to science-fictional vignettes -- only without the accompanying pleasure of narrative or characterization -- is precisely what functions to make the Superlative demand that a focus on projected and idealized outcomes be substituted for a focus on proximate ones the hallmark of "seriousness" by their lights, contrary to all sense and in fact in a way that is immune to the interventions of common sense, strictly speaking.)
If the Superlative Technocentrics were actually right to imagine that billions of people now living will find themselves all too soon living in a future transformed by Friendly or Unfriendly post-biological intelligences, nanotechnological superabundance, biomedical immortality, or the like (and I do think they are far more likely to be wrong than right and I think this matters enormously), even granting them this, I think they are profoundly wrong to imagine that our best way to facilitate the best, least violent, most fair (or whatever) versions of these Superlative outcomes is to contemplate and prepare for the Superlative outcomes themselves, in the abstract, as these outcomes suggest themselves to us in our own impoverished vantage (an impoverishment exacerbated all the more by marginal and anti-democratic modes of Superlative deliberation). Such contemplation and preparation circumvents the ongoing and plural stakeholder contestation that will certainly articulate the unpredictable developmental forces and the dynamic developmental pathways along which such outcomes would actually "arrive" (were they to do so), and it ignores the practical, scientific, technical, pedagogical, regulatory, and cultural knowledges arising out of our collective day-to-day responsiveness, competition, and collaboration in the plural presents from which no less plural futures will present themselves -- knowledges that will not only shape but actively constitute our foresight and provide the living archive to which future generations, or the communities to which we will ourselves later belong, will make our collective recourse as we struggle to cope with these outcomes and their alternatives.
To be sure, this is not the denigration of foresight as such that Superlative Technocentrics will be sure to accuse it of being, but simply an insistence that foresight properly emerges from the ongoing contestation and deliberation of the plurality of actually-existing stakeholders to the emerging technodevelopmental terrain rather than from an idealized projection of Superlative outcomes onto the future by the impoverished perspective of a marginal minority and from the impoverished position of that future's past. This means that serious futurists (Jamais Cascio provides a well-respected example here) would always present multiple technodevelopmental outcomes in their proposals, no one of which solicits identification but all of which, taken together, capture the texture of an upcoming technodevelopmental terrain in its dense plurality. And so, too, serious futurists would always stress the contingency, non-autonomy, and diversity of the impacts of technodevelopmental outcomes from the perspective of the plurality of their stakeholders. Serious futurists, finally, should always understand and emphasize that the rationality of foresight is more inductive than deductive; and, to the extent that such futurism would be democratizing rather than merely profitable for incumbent interests (and, hence, strictly speaking, better described as retro-futurism), futurists must grasp that the pragmatic point that deliberative foresight foregrounds induction over deduction translates in political terms to a foregrounding of openness over optimality.
(Regular readers may be surprised to see me talk about the very possibility of a "serious futurological practice" given all the abuse I tend to heap on self-identified futurists here... but the simple truth is that it seems to me there are good reasons to think that futurism, so-called, might very well manage for another generation or so -- as psychoanalysis managed to do for well over half the twentieth century -- to remain one of the few places where something like actual philosophical thinking might take place in a way that will be taken seriously even by anti-intellectual Americans. That is more than enough to get me to pay serious attention to it.)
Again, the simple truth is that I think that the preoccupations of no small amount of Superlative Technology Discourse are symptomatic rather than serious. As often as not, it symptomizes (as does so much literary sf, much more provocatively) the fears and fantasies of people caught up in disruptive technoscientific change; it symptomizes (as does so much neoliberal discourse, which remains complementary and often still explicitly correlated to technocratic discourses generally and Superlative Technology discourses particularly) the social, subcultural, and political marginality of many of the personalities drawn to these discourses.
But if the outcomes the Superlative Technocentrics have battened on to really were to come about in some form, the facilitation of the best, safest, fairest, most democratic versions of these outcomes would arrive from ongoing plural stakeholder discourse rather than from the unilateral implementations of elite and abstract discourse. That is why my own technoprogressive politics (which is no less technocentric than that of the Superlative Technology discourses when all is said and done) would direct its energies to securing, subsidizing, and celebrating peer-to-peer formations of technoscientific practice, education, regulation, funding, and of p2p education, agitation, and organizing for radical democracy (including the democratization of the planetary economy) in general as a more practical technodevelopmental politics -- more practical even in the event that technodevelopmental outcomes come to assume anything like the contours that preoccupy the imaginations of Superlative Technocentrics.
If Singularitarians, so-called, really are as worried about scary Robot Gods as they seem to be, then it seems to me a far more practical focus for their attention and action would be to participate in contemporary anti-militarist and anti-globalization movements to diminish the role of the secretive and hierarchical command formations in the midst of our democratic society and to overturn the legal fiction of corporate personhood with all its pernicious antisocial and antienvironmental implications -- which are the locations in society out of which anything remotely resembling the Superlative fears and fantasies of these Singularitarians is likeliest to emerge. Otherwise, the ongoing regulation and monitoring of already existing and actually emerging malware seems to me incomparably more likely to provide the practical resources to which we would make collective recourse were we eventually confronted with recursively self-improving software, whether rightly taken to be intelligent or entitative or not, rather than whatever our own abstract fancies might now offer up to those -- including, as likely as not, some of us -- who inhabit days to come (between now and which there would be, after all, many intervening days filled with people quite as intelligent as we are, but incomparably better informed, and directing themselves to these actually urgent problems according to the terms in which they actually occur, likewise coping with ongoing and emerging malware and so on, peer-to-peer).
If Nanosantalogists really want nanofactories to incubate a high-tech gift society without reducing the planet to goo, then it seems to me a far more practical focus for their attention would be to participate in the contemporary copyfight and access-to-knowledge movements that would keep the nanofactory instructions out of the hands of incumbent elites, and to participate (as it seems to me my friends at the Center for Responsible Nanotechnology already often do, at least when they are at their best) in movements to empower planetary regulation and oversight of pandemics, tsunamis, climate change, weapons proliferation, the manufacture of and trafficking in toxic substances, and so on, since it will be the experiences and insights we acquire in these fraught and urgent already ongoing efforts that will provide the real archive on which we would really, truly depend were we to find ourselves confronting the Superlative fears and fantasies of these Nanosantalogists.
If Technological Immortalists, so-called, really want to inspire and fund and implement a SENS program to overcome the suffering and pathologies we customarily associate with human aging, then it seems to me a far more practical focus for their attention would be to embrace the rhetoric of the Longevity Dividend, to refigure what de Grey describes as the Seven Deadly Things (or whatever number this eventually amounts to, a habit of qualification and caveat being a welcome thing from especially speculative scientists) as seven separate medical conditions among countless others likewise demanding elaborate foundations and diverse research teams, and, above all else, to refrain altogether from idiotic talk of "living forever" or "immortality" in the first place (given the admission by most Technological Immortalists that theirs is not a program that would elude death by disease, violence, or accident even if it managed to achieve its already implausibly Superlative ends, it is curious -- that is to say, importantly symptomatic -- that they should be so reluctant to eschew these essentially faithful rather than factual discourses). But more to the point, it seems to me that enthusiasts for longevity and rejuvenation medicine should be devoting considerable efforts to movements to secure universal healthcare, to address neglected diseases among the planetary precariat, to provide clean drinking water and basic healthcare to everybody on earth, to defend the informed nonduressed consensual recourse to wanted therapies (whether normalizing or not) and protection from unwanted therapies (whether normalizing or not) in the context of contemporary modification medicine, to end the so-called war on (some) drugs (together with the fraudulent marketing and mandated use of other drugs) wherever its racist anti-democratizing tentacles reach, and so on.
Superlative Technocentrics are likely to recoil from suggestions like these, dismissing them as stealthy, well-nigh "closeted" half measures, but the truth, I'm afraid, is that their own monological fixations and hyperbolic derangements of these sensible -- even urgent -- recommendations bespeak either a profound misunderstanding of the complex, dynamic, ineradicably politicized technodevelopmental terrain as it actually exists, or a derangement that symptomizes their own irrational passions, born of social marginalization, short-sighted greed or hostility, neurotic fears of contemporary change and lack of personal control, straightforward narcissistic personality disorder, or the like.
It is well known (that is to say, known among the few odd people like me who keep up with this sort of thing at all) that I advocate what I describe as a technoprogressive political viewpoint, which regards ongoing and emerging technoscientific change as at once the most dangerous and most promising field of contemporary democratic-left and emancipatory politics. For me, "progress" has come to be a matter of technodevelopmental social struggle first of all, the contestation among a plurality of stakeholders over the ongoing articulation and distribution of technoscientific costs, risks, and benefits. It is from this perspective in particular that I understand the urgent struggles against neoliberal corporate-militarism, as well as environmentalist movements, planetary human rights and social justice movements, and so on.
While this perspective is no less technocentric than that of the Superlative Technocentrics I so regularly critique, our differences could not be more stark otherwise (but this one point of continuity is enough to keep me on my toes lest my own technoprogressivity drift here and there into a problematic Superlativity quite in spite of myself). Not only do I think that the best, most democratizing, most emancipatory technodevelopmental outcomes can be facilitated by politics that are perfectly intelligible to the democratic-left progressive mainstream imagination (as Superlative Technocentricity very definitely is not), but I also think that there is an emerging technoprogressive mainstream on the American political scene and elsewhere around the world that is conjoining the forces of the left blogosphere and Netroots and other emerging p2p democratic formations, the defense of consensus science, copyfight, free press and open media, access to knowledge (a2k) movements, commitments to a politics of choice that encompasses both abortion and ARTs and consensual drug policy, growing demands for renewable energy and sustainable production, and other strands of the contemporary technoscientific tapestry, from the ground up, peer-to-peer, all around us, right here, right now.
Why Superlative Technocentrics would prefer their far-flung and hyperbolized futures over these actually-existing popular technoprogressive energies is entirely beyond me. No doubt only their therapists (or possibly, for a few of them at least, their financial advisers) know for sure.
16 comments:
"Why Superlative Technocentrics would prefer their far-flung and hyperbolized futures over these actually-existing popular technoprogressive energies is entirely beyond me."
http://query.nytimes.com/gst/fullpage.html?res=9D00EFDA1F30F931A35752C0A9639C8B63
Dale,
What do you think about Peter Singer and utilitarianism? Why not allocate efforts between projects on the basis of the expected marginal benefit of your contribution? Figure out whether you expect that your personal effort will do more good if you work as a physician and donate most of your income to antiwar organizations, or serve as a researcher in a malaria vaccine development program, or try to work on preparing and developing solutions for problems that have not yet become "actually existing" disasters (e.g. asteroids, catastrophic positive-feedback global warming, high-lethality engineered pandemics, or AI). Then act on those conclusions.
There's a lot to admire in Peter Singer's work -- my personal favorite so far has been his One World.
I do take very seriously the distinction of is from ought, however, and am unhappy about the reductionist rhetorics that often accompany some versions of "utilitarian" and "consequentialist" discourses (though not necessarily all of those which are inspired by Singer in particular).
And it is also crucial to remember, when one is talking about "allocation[s of] effort," that the meta-principles to which we perfectly properly might make recourse in our personal ethics are not always appropriate guides to allocations of public resources in defiance of expressed public interests.
I don't agree, by the way, that "AI" is an "actually-existing" disaster in anything remotely like the sense of the "global warming" or "asteroids" that accompany it in that parenthesis. It is in my view a derangement of sense facilitated by Superlativity to think otherwise -- which is a fine reason to critique it.
Among many others, in my view, as you can see. YMMV, of course.
Your position and recommendations boil down to a socialist position. It is a position which does not look at the economics of problems. Some problems have a higher return on lower cost.
An example: you state that instead of trying to advance SENS to push the limits of life extension, those who support it should promote universal healthcare, etc. Those other things are not bad goals, but they are far more costly.
SENS can be advanced in the rounding error of those other health problems. Healthcare for those who are not covered under US private insurance already has hundreds of billions spent each year. SENS is making progress using a few million dollars. A fully funded SENS program might be $1 billion/year versus the $2 trillion healthcare system.
Certain approaches to achieving goals are just resource inefficient. If $2 trillion/year is spent on healthcare and the system is still woefully inadequate, then maybe the solution is not to spend more money on the current system.
Andy Grove has suggested modifying the system to extend health services.
http://www.wired.com/medtech/health/news/2007/04/andygrove_healthcare_qanda
Grove breaks the problem of health care into three manageable chunks. Two have technological solutions -- but not complex tech. Grove wants to keep the technology as simple as possible, a surprising idea for a man who put millions of transistors on a chip.
First: Keep elderly people at home as long as possible (an idea he calls "shift left"). Use high-tech gadgets to help them remember to take their medicine and monitor their health. In one year, if a quarter of the people now living in nursing homes went home, it would save more than $12 billion, Grove says.
Second, Grove advocates addressing the uninsured by building more "retail clinics" -- basic health care centers in drugstores and other outlets that can take care of problems that are presently, and expensively, addressed in emergency rooms.
Lastly, unify medical records using the internet. In his vision, every patient carries a USB drive containing his or her medical records, which any doctor can download.
Society can achieve more with technology and systems by spending smarter and adjusting how things are done.
Dale, you are railing against anything superlative, against outliers. You have your favorite solutions to problems, which already have a lot of people who have been working on those problems for decades. These approaches have worked poorly in the past. Things like arms control have had minimal effectiveness. You are asking for the 0.1% or less of people and resources now devoted to trying approaches and solutions that are completely different to be diverted into a surge for those old approaches.
1. I fail to see how diverting the people and resources from the small programs that you dislike will meaningfully help the bloated and old ones that you like. I fail to see how trying to get a few thousand people to support those programs will tip the political balance either.
2. If these (AI, Advanced Nanotech, life extension/SENS) approaches are wrong and doomed to fail, then why does it offend you so much? These industries are smaller than the TV, movie and other entertainment industries. They are smaller than the military industrial complex. Smaller than tobacco and alcohol. There are deeper pockets to go after and more destructive industries to divert. These industries are smaller and more benign than a lot of other segments of the economy.
Your position and recommendations boil down to a socialist position.
I think my position would be better described as peer-to-peer democracy.
It is a position which does not look at the economics of problems. Some problems have a higher return on lower cost.
This is not true, although I will concede that I do not foreground economic issues in this particular post -- I'm talking more about the rhetorical dimensions of a discourse and the subcultural politics of a programmatic perspective. I do recommend the strategy one finds summarized as the Longevity Dividend, which is very much a policy discourse emphasizing social cost/benefit trade-offs of the kind that you are talking about here.
An example: you state that instead of trying to advance SENS to push the limits of life extension, those who support it should promote universal healthcare, etc. Those other things are not bad goals, but they are far more costly.
Universal single-payer health care is less costly than the private system it would replace here in America, and it is against that alternative that we should judge its costs. The connection of universal single-payer healthcare to programmatic facilitations of longevity and rejuvenation medicine is more general in my view -- namely, I fear that unless healthcare is implemented as a planetary human right, then modification medicine (including longevity medicine) will risk exacerbating already brutal, dangerously destabilizing planetary inequities into a mode of literal speciation. It is very difficult for me to imagine freedom or democracy surviving such an eventuality.
On the other side of the coin, I think that the politics of implementing universal healthcare on a planetary scale can actually facilitate the emergence of consensual modification medicine. First, I think it is crucial that SENS advocates stop talking about immortality and start redescribing their therapeutic aims as the provision of "healthcare." Second, as healthcare techniques derange basic assumptions about what healthy bodies live like and might be capable of, it seems ever more crucial to me to rethink healthcare less as normative and more as consensual. Introducing modification medicine (like some of what SENS is about) into the basic healthcare frame will facilitate this shift from a normative to a consensual model of healthcare, while at once mainstreaming the aspirations of advocates for rejuvenation medicine, very much to the benefit of their expressed desires.
A fully funded SENS program might be $1 billion/year versus the $2 trillion healthcare system.
Handwaving aside, the rhetoric of the Longevity Dividend provides a powerful frame for making arguments of this kind. Ask yourself just what work the "versus" is doing in this formulation of yours -- I can see how it might be emotionally edifying in the short term, but it seems to cost much more than it's worth in terms of your practical aims over the longer haul that matters, as far as I can see.
Certain approaches to achieving goals are just resource inefficient. If $2 trillion/year is spent on healthcare and the system is still woefully inadequate, then maybe the solution is not to spend more money on the current system.
As an advocate for universal single-payer healthcare in America and as an advocate for a2k/copyfight, especially including a transformation of intellectual property regimes that benefit big Pharma at the cost of needless suffering among vulnerable populations, it should be clear that I have little attachment to "the current system." But the wider context here is rhetoric -- a question of the ways in which widely held assumptions, characteristic metaphors and frames, discursive history and inertia, and prevailing argumentative formulations provide the force of intuitive plausibility on which we must depend when we make our cases for change. This "system" or "language" of health, consent, fairness, agency, emancipation must be understood, appropriated, opportunistically redeployed if one wants to transform our sense of the possible and the important and so transform what we are capable of.
Society can achieve more with technology and systems by spending smarter and adjusting how things are done.
Sure, my only quibble -- and you may not even disagree with this actually -- is that "technology" is not separate from the practices to which you refer as "spending smarter and adjusting how things are done." As I say here all the time, what people refer to as "technology" through the conjuration of a metaphor of an accumulating toypile of useful instruments seems to me incomparably better understood through the conjuration of the metaphor of public process, like language-use or a collective barn-raising or a contentious debate, "technodevelopmental social struggle."
Thanks for your serious and considerate and interesting reading of my post.
This second response seems to me rather less interesting than the first one. You've gone from the initial provocation to a smoothing over that is returning you to your customary assumptions. Now you feel content to say that my post is a matter of "railing" (unserious emotionalism), opposition to "outliers" (timid unserious conservatism), complacent approval of the status quo -- "solutions... which already have a lot of people... working... on" them (unserious ignorance), and so on.
Of course, nothing could be further, and more conspicuously so, from the truth: peer-to-peer formations are emerging and remain unpredictable in their full force, the recommendation of an informed consensualization and denormalization of basic healthcare discourse is the furthest thing from customary, complacent, ignorant, or sentimental.
This can be a conversational opportunity for us: inhabit the provocation of what I said, such as it is, and see how you might avail yourself of it (including, certainly, in ways I might not particularly approve of), don't drift back to the comfortable orthodoxies of Superlativity (which, again, interests me primarily as a discourse and a subculture producing what seem to me reductive and antidemocratizing effects on technodevelopmental social struggle).
I don't know exactly what to say about:
These approaches have worked poorly in the past. Things like arms control have had minimal effectiveness. You are asking for the 0.1% or less of people and resources now devoted to trying approaches and solutions that are completely different to be diverted into a surge for those old approaches.
The force of the "completely different" here is far more a matter of your metaphors than of the facts on the ground as far as I can see (which doesn't mean that I don't agree there are some differences that make a difference here, obviously I do), and the appeal to just these metaphors, I'm afraid, tends to be a matter of psychology and subculture more than the question of technical efficacy and useful advocacy it fancies itself to be.
2. If these (AI, Advanced Nanotech, life extension/SENS) approaches are wrong and doomed to fail, then why does it offend you so much?
Well, this is not a question of what I like or dislike or am offended by or what have you. Once again, I am a rhetorician by training and by trade and I am here to tell you that [1] you are not likely to achieve the ends you desire through the formulations that presently appeal to you, and that [2] the appeal they have for you bespeaks what looks to me like an imperfect understanding of the complexities of technodevelopmental social struggle, in a way you would benefit from rethinking.
In a larger sense, these issues around SENS are just illustrative of the shortcomings of the discourse of Superlative Technology more generally, which seems to me profoundly mystifying and antidemocratizing for many reasons (stated in the piece we are discussing and elsewhere).
It is because I take technodevelopmental politics so seriously that I take what look to me like the dangers of Superlative Technology discourses so seriously. I often say that democracy without technology will fail, and technology without democracy will destroy the world. Technocentric folks are alive to the stakes of this sort of claim, which makes them good people to devote one's energies to even if they are caught up in what seem to me at best silly and at worst pernicious Superlativity.
These industries are smaller than the TV, movie and other entertainment industries. They are smaller than the military industrial complex. Smaller than tobacco and alcohol. There are deeper pockets to go after and more destructive industries to divert. These industries are smaller and more benign than a lot of other segments of the economy.
I take your point, but my focus reflects my interests for one thing, it reflects my sense of the emerging institutional terrain and its budgetary priorities, and it reflects my sense that Superlative discourse and the practices it shapes, however modest they may be for now, comparatively speaking (since, frankly, they are already enormously lucrative and exert an unprecedented tug on the popular imagination), are such a snug fit with prevailing assumptions of political incumbents and neoliberal precarization -- in ways I've discussed elsewhere at length -- that my resistance to the latter demands my attention to the former.
That's why I talk about these things as I do. Once again, thanks for your engagement with my thinking here.
I am not against single payer systems. (Before moving to the USA in 1996, I was born and grew up in Canada, which is mostly a single payer system).
However, for me the matter of who pays is a side issue to getting the costs and demand for healthcare down.
0. I would argue that the automation and reduction of costs for universal basic preventative healthcare can be supported by reduction of downstream costs.
(prevention dividend)
1. Air pollution from fossil fuels contributes a great deal to the avoidable health problems of the USA and the world. Coal in particular generates more pollution and toxins per unit of power produced.
Outdoor air pollution kills 3 million people per year worldwide (World Health Organization). Indoor air pollution kills 1.5 million per year worldwide. Far larger numbers are made sick than die, and those who die from air pollution also drive up societal healthcare costs along the way. About 55 million people die each year from all causes (the arithmetic is sketched after this list).
As much as 30% of healthcare costs in the USA are caused by air pollution and fossil fuel toxins like mercury and arsenic.
(A clean air dividend)
2. Extending lives via SENS (and via calorie restriction, which a recent study has linked to improved mitochondrial function, one of the seven targets of SENS) would also reduce annual healthcare costs if successful.
(The longevity dividend).
3. Reducing healthcare costs would reduce costs to business and improve economic performance and activity.
4. Alternative technologies and systems should be developed and piloted in small trials and then expanded. The systems should be adjusted to encourage and enable more experimentation so that better ways can be found and proven. I feel that the Google model of 80% of resources on the mainstream and 20% on nearly unlimited experimentation is something approaching the correct ratio. A larger share should go to prizes for actually achieving various stretch goals, instead of constantly paying for institutions and buildings on the basis of political considerations and reputation, without any adjustment for performance.
5. Proposals should not be penalized because their goals target superlative performance.
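As a rough illustration of how the figures in point 1 fit together, here is a minimal back-of-envelope sketch in Python. It uses only the numbers cited above (the 3 million and 1.5 million air pollution deaths, the 55 million total deaths, and the claimed 30% cost share); the US health spending total is a placeholder assumption for illustration only, not a figure from this discussion.

# Back-of-envelope sketch of the "clean air dividend" point above.
# Inputs are the figures cited in the comment, except us_health_spending,
# which is a placeholder assumption used only for illustration.

outdoor_deaths = 3_000_000    # WHO figure cited above, per year worldwide
indoor_deaths = 1_500_000     # WHO figure cited above, per year worldwide
all_deaths = 55_000_000       # deaths from all causes, per year worldwide

air_pollution_share = (outdoor_deaths + indoor_deaths) / all_deaths
print(f"Air pollution share of all deaths: {air_pollution_share:.1%}")  # ~8.2%

claimed_cost_share = 0.30     # the "as much as 30%" claim above
us_health_spending = 2.2e12   # placeholder: rough annual US health spending, USD

dividend_upper_bound = claimed_cost_share * us_health_spending
print(f"Implied upper bound on the clean air dividend: "
      f"${dividend_upper_bound / 1e9:.0f} billion per year")

Nothing in the sketch is an estimate of its own; it just shows the scale the cited figures imply if taken at face value.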
Trying to sell/relabel SENS or other advanced technological approaches under more mundane descriptions, as you advise, is problematic, because long established definitions and biases in the funding systems, and in the people making traditional funding decisions, have already blocked mainstream funding.
Molecular nanotechnology (MNT) almost received some traditional funding but was outmaneuvered by some traditional business people. Going back in time to the 1980s, downplaying any concerns, and just trying to get each step funded and moved forward would have been the best approach for MNT, but that option is mostly past at this point.
The funding of the initial stages to gather scientific evidence requires bootstrap funding from non-traditional sources.
I would agree that where researchers can present a segment of the work as mainstream, they should do so if it might help get it funded from a mainstream source.
In terms of trying to use smaller amounts of money to try a lot of different approaches, I am applying an extension of some of the arguments for open innovation.
http://www.amazon.com/Open-Innovation-Imperative-Profiting-Technology/dp/sitb-next/1578518377
One of the proponents of open innovation tracked the projects that Xerox chose to stop funding. He found that the projects that went outside Xerox for funding and then formed companies eventually, over 20 years, exceeded the entire value of Xerox.
This was a demonstration of the opportunity cost of being unable to accommodate alternative business models and of making errors in judgement about which projects will or will not work: the cost of false negatives.
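To make the false-negative point concrete, here is a minimal Python sketch. The 1% outlier probability is entirely made up for illustration and has nothing to do with the Xerox study; it only shows why spreading a fixed budget across many small bets makes it much less likely that the rare big winner is missed altogether.

# Toy illustration (made-up numbers) of the cost-of-false-negatives point.
# Assume any given funded project has a small, fixed chance of being a rare
# outlier success; the question is how likely a funder is to miss all of them.

p_outlier = 0.01  # hypothetical chance that a given project is an outlier winner

def chance_of_funding_an_outlier(n_projects: int) -> float:
    """Probability that at least one of n independently funded projects is an outlier."""
    return 1 - (1 - p_outlier) ** n_projects

for n in (5, 20, 100):
    print(f"{n:>3} funded projects -> "
          f"{chance_of_funding_an_outlier(n):.0%} chance of catching an outlier")

With the same total budget split into more, smaller grants (5, 20, 100 projects give roughly 5%, 18%, and 63% in this toy setup), the chance of a false negative on the one project that matters drops sharply, which is the open innovation argument in miniature.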
==Back to relabelling and succeeding in mainstream funding.
Certain kinds of AI actually have large (multi-billion dollar) mainstream success. Mainstream AI runs a lot of the financial trading in the world. However, many potentially high-value AI goals do not fit within the mainstream structures.
Mainstream nanotechnology also has large funding success, although most of that is relabelled from the chemical industry.
==Tying the two together
Sometimes the superlative has to take root and thrive outside of the mainstream for a time.
Also, a comment on not knowing when false negative errors (in technological development choices) are being made: we should strive to develop some efficient societal means to support potentially high value false negatives. If a potential false negative is mundane, then it would not be that bad a mistake, and presumably mainstream progress could substitute for it.
===Equality, democracy and technology. Bad individual and societal choices.
I think that some level of inequality is inevitable when freedom of choice is to be valued. If we want to let someone choose a lifestyle which ultimately makes them poorer, then inequality will result when other people make choices that result in greater personal wealth.
If a person chooses never to learn how to become a moderately successful investor in any asset class, then they will end up poorer than those that do.
If a person chooses a career in a field which will not be as highly valued or even chooses no education then that is their choice.
China has had the economic burden of carrying the inefficient state sector. Other societies have the economic burden of carrying the segments of society who make what have proven to be economically bad life choices. Many of those choices could have been shown, at the time they were being made, to be almost certainly bad: choosing alcohol, tobacco, and drug use; choosing education in poorly compensated areas; choosing inadequate education (like stopping before the end of high school); choosing not to figure out aspects of how to become economically successful.
Societies make what turn out to be bad technological funding choices. The NASA space program is often cited as technological funding. However, I would say it is predominantly driven by projects for political support, with technology development or goals as a secondary or tertiary side effect and cover story.
A fraction of societal resources is devoted to international and domestic charity, aid, and support programs. A richer and more capable society will be able to continue those programs in an expanded way simply by not lowering the fraction that is devoted to them. The China example shows that even though high growth increased inequality, more people were raised out of poverty by allowing high growth to occur. China also shows that it is not in the interest of the well off to refuse to help those who were not able to benefit: the interests are preventing societal unrest and increasing the numbers able to participate as consumers and as contributors to societal development.
There's plenty to agree with in what you say in your last two posts. If I can find the time (I'm being called away to prep for tomorrow's lectures) I'll dig deeper in a couple of days' time. Thanks for your continued interest and engagement.
Your mean-spirited and frankly hateful attacks on people holding a minority viewpoint based purely on hope for a brighter future do you little credit. You trumpet plural stakeholder discourse, but as soon as you see it, you respond with vicious personal insults. Your "democratization" looks more like "demonization" to me.
Your mean-spirited and frankly hateful attacks on people holding a minority viewpoint based purely on hope for a brighter future do you little credit.
What nonsense. People in weird robot cults should get a thick skin even if they can't get a life. See, now that's something like a vicious, hateful attack.
The post you're whining about, to the contrary, actually did you the service of taking you more seriously -- offering up arguments and analyses with which you can disagree or not, as you see fit, for example -- than almost anybody else would who thinks your discourse is as ridiculous and pernicious as I truly happen to do. (And for reasons I delineated at great length.)
Leave it to a subcultural technophiliac, though, to mistake critique for defamation. Persecuted minority, my foot. And by the way, there's a difference between hope and hype.
Your "democratization" looks more like "demonization" to me.
Your failing eyesight may need "enhancement" even more than your failed insights, then. Pa-DUM-bum.
Wha...? Anonymous, I guess I'm grateful to Dale for bringing folks like you out of the woodwork, so to speak. I'm willing to give a *lot* of respect to a lot of fairly "fringe" viewpoints (especially considering I probably hold some of them), and I did not see any vicious personal attacking going on until your comment showed up. No viewpoint is beyond criticism, regardless of how much "hope" it is based on. Sheesh.
Dale,
Don't worry about the tripe posted by anon and his ilk. Believe me, you're far far too kind to these crack-pot/psycho/nutter Singularitarians, Libertarians and associated kooks.
I was there on the SL4 and Extropy lists listening to that woeful tripe from 2002-2006, 4 wasted years. The Singularitarians spent the whole time stuttering and prancing around like a bunch of utter lunatics - all they ever did was feed me a bunch of crack-pot ideas about AI and tell me how smart they were, how they and only they had the answers and how only people as smart as them were fit to even consider the issues.
I actually pointed out the central AI problem of 'Reflectivity' (the central AI problem with Gödel) on the SL4 list several years before Yudkowsky even recognized it or started talking about 'the problem of reflectivity'. Yudkowsky's reply to me on the list (it's all on record, you know):
'Oh I haven't done mathematical logic yet'.
"No doubt only their therapists (or possibly, for a few of them at least, their financial advisers) know for sure."
Sure looks like a personal attack to me.
Sure looks like a personal attack to me. I guess so, if you want to be a complete stick in the mud about it. I would have thought that the ironic citation of the campy phrase "only her hairdresser knows for sure," would have signalled, rather in the way a smiley does in other online contexts, that the closing comment was more warmly than viciously intended. But, you know, whatev. The fact is my style tends to the acerbic anyway, so if you incline to a certain tiresome variation of earnestness you're going to find much to disapprove of in my style of writing from beginning to end. Your loss, let a bazillion flowers bloom, and whatnot is all I can say.
Marc, I don't agree that all people drawn to Superlative Technology discourses are crackpots (though a nonnegligible number of them may well be), I don't agree that we can simply dismiss ideas, movements, and discourses just because they are crackpot, since we see all around us the evidence of the damage that can be done by well-organized crackpot ideologies like corporate-militarist neoliberalism and neoconservatism, and finally I don't agree that crackpot discourse is inevitably uninteresting or unworthy of attention on its own terms inasmuch as often these discourses symptomize other attitudes and forces abroad in culture that demand understanding and intervention. I take superlative and sub(cult)ural technocentrisms very seriously indeed, even if, like you perhaps, I don't always take individual partisans of these discourses particularly seriously. I hope that helps explain where I am coming from, thanks for the interest.