Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All
Saturday, October 06, 2007
Singularitarian Agony
I think the following e-mail exchange will interest those of you who are following my latest engagements with partisans of various Superlative discourses (I omit names here to protect the innocent).
In the discussions that have recently taken place at your blog, you've demonstrated a strong belief that entitative human-surpassing AI will not be possible within a meaningful timeframe
It's true I am incomparably more skeptical of such an eventuality than Singularitarians seem to be, but, rather true to form, you are not quite grasping the reasons that fuel my skepticism, nor the intended force of my critique.
I'm sure you will agree that there is little point in arguing with a person who insists that "God exists" on their own terms as they begin unspooling their interminable "proofs" to that effect, when you don't believe the utterance "God exists" is coherent conceptually in the first place, especially given the weight and significance that such godly existence is presumably required to bear for the believer.
In much the same way (the analogy isn't exact, but I think you will get the point) I believe that the idea of entitative human-surpassing post-biological superintelligence is deeply problematic conceptually in its assumed Singularitarian usages. Indeed, I think it amounts for the Singularitarians to a farrago of pretenses to adequate knowledge (of embodied consciousness as such, of the plural modes of intelligence, of values and morals as such, not to mention of the state of the art in computer science), of embarrassingly loose analogies, of facile reductions of palpable complexities and so on.
It should come as no surprise, then, that I think the typical insistence by Singularitarians that "serious" critics of their curious preoccupations must address themselves to "technical" questions -- questions that shunt aside all such difficulties and focus instead on number-crunching the Robot God Odds, as though we all knew what a Robot God would consist of in the relevant sense -- is a completely self-serving changing of the subject any time anybody comes near to grasping the arrant foolishness at the heart of the whole enterprise.
As I have said before I am a materialist about mind and a pluralist about intelligence so I am not so easy to dismiss as you might like as some muzzy mystic or whatever when I tell you that you people simply don't seem to know what you are talking about when you talk about the "entity" who would presumably exhibit your post-biological intelligence, when you talk about the "intelligence" that would be "super"iorized, when you invest "emulation" and "simulation" with the significances you need, when you glibly presume inter-translatability between modes of materially incarnated states of "consciousness," which you already reduce problematically a dozen ways to Sunday, when you assume what you take to be an "objective" perspective on consciousness, usually to the denigration of what you take to be "subjective" perspectives, and heaven only knows how you would ever cope with "inter-subjective" dimensions of consciousness if you were ever bothered to take the performative dimension of material "capacities" seriously, when you make the pronouncements about "friendliness" and "unfriendliness" which invest your discourse with much of its motivational urgency, and so on.
All the equations and confident predictions in the world won't paper over the conceptual incoherence and inadequacy of your assumptions and your aspirations as they tend to play out in your discourse (and, worse, as they play out ethnographically in the organizational sub(cult)ure that you so forcefully identify with this conceptual content). It comes as little surprise, given all this, to discover that the tight-knit clutch of believers in this technodevelopmental program go on to exhibit many of the organizational features one discerns in cults and fundamentalist social formations; it comes as little surprise to grasp the ways in which the transcendentalizing and hyperbolic claims projected onto the "future" with which these believers are preoccupied seem, on closer inspection, so often in individual cases to symptomize the attitudes of these believers toward their contemporary circumstances: expressions of alienation from their fellows or from their bodies (or both), expressions of scarcely concealed frustration at perceived social slights, exhibitions of narcissism, superiority and inferiority complexes, traumatic discourses playing out fears of a loss of control and fantasies of omnipotence, and on and on and on.
From here I go on to critique the ways in which the technocratic elitism, the reductionism, the determinism, and the developmental autonomy of the "technical" dimension usually explicitly and almost always implicitly expressed in Singularitarian discourses play into the hands of the politics of incumbency, which means, as often as not, North Atlantic exceptionalism and corporate-militarist globalization. I also point to the ways in which -- like all the modes of Superlative Technology Discourse -- Singularitarianism activates irrational passions (uncritical greed, blind panic, cynicism, passive acquiescence to so-called transcendental forces, reductive identification with ideological vocabularies claiming to possess the "keys to history," and so on) and so deranges the urgently necessary work of sensible democratic deliberation about ongoing technodevelopmental questions on a planetary scale in a networked world.
The critique has several dimensions, then: conceptual, practical, symptomatic, organizational/sub(cult)ural, and political. These dimensions are inter-implicated but not reducible to one another. I daresay it takes only grasping the force of the critique at one of these levels to realize that one should surely take Singularitarians with a healthy grain of salt, but to understand why I take Singularitarianism seriously as a symptom while not taking it seriously at all as a discourse on its own terms one would probably have to understand the critique in all its dimensions.
You've demonstrated a need to vehemently label those of my kind as "Robot Cultists", "pinheads" etc.
"A need to vehemently label"? I'm just offering up labels that seem to me descriptively adequate. I like your phrase "those of my kind," though. It's honest. One might almost say you are demonstrating a need to vehemently identify with a marginal sub(cult)ure, in the face of a critique that emphasizes different sorts of issues than the ones you normally think about or issues that you don't prioritize particularly. Another option in the face of such a critique would be to eschew the tribalist impulse, recognize that "if the shoe fits, wear it," and otherwise proceed as any decent analytic thinker would and try to propose a few salient distinctions to cope with some conceptual tensions that you had not noticed or taken seriously hitherto.
You seem quite scared
No I don't. I don't mean to dwell on this, but if I seem scared to you, I daresay you are even farther gone than you realize. Singularitarianism symptomizes larger issues that interest me in Superlative technodevelopmental discourse generally. Ask yourself what work is getting done for you personally when you mistake this interest for fear. This is a very common response I get from Singularitarians I've annoyed, and it seems to me enormously indicative.
that if the scenarios we talk of were acknowledged as something the risks of which need to be taken into account when planning for the future, this would play into "any number of already-existing neoliberal and neoconservative agendas".
As I have stated on more than one occasion, in more than one way, yes, I do think Singularitarian formulations derange our capacity to talk sensibly peer-to-peer about actual and urgent technodevelopmental quandaries here and now. I think this effect matters more as it disseminates into general tech-talk than as it is expressed within the confines of the marginal sub(cult)ure of "your kind," but it is best to nip this sort of thing in the bud if one can manage it. And yes, I think that in some respects default Singularitarian discourse is an especially nice fit for certain neoliberal and neoconservative agendas that I oppose. One more reason to understand what is afoot and decry it publicly as best I can.
I doubt that you can present an argument for the infeasibility of brain emulation (within 50 years or so) that a responsible person could accept.
"A responsible person" meaning, one expects, a current member or likely candidate to join the Robot Cult of "your kind"? You're quite right, I doubt I can present an argument that would dissuade True Believers from their faith, given the psychic work that faith is likely to be doing for them.
In the absence of such an argument, a responsible person should plan for the risks of AI, even if it forces them to re-evaluate their priorities.
If you really believed that -- if you really believed that the only way one can talk about these issues is on the reductive terms you prefer (and only on the basis of which one can be bamboozled into the sort of skewed priorities exhibited by so many Singularitarians) -- then, of course, you wouldn't have sent me this e-mail.
Could you elaborate on how you are able to convince yourself that e.g. brain emulation can't possibly in a meaningful timeframe produce computer programs of human-equivalent (and shortly thereafter, human-surpassing) intelligence?
It's far better for you people to explain calmly how exactly you became the sorts of folks who stay up at night worrying about the proximate arrival of Unfriendly Omnipotent Robot Gods given the sorry state of the world (and computer science) at the moment. And, especially since I assume that you do not make the mistake of thinking that a picture of you is you, one wonders why you think the differently complex pictures and profiles and data aggregations you are calling "emulations" would be you either.
Do you in fact realize that not much else is required for brain emulation to achieve such results, than decent brain-scanning technology and sufficient computing power to run brain simulations based on the scanner data?
No, I don't "realize" that. I think you are flabbergastingly wrong to think you have "realized" that when all you are actually doing is asserting it without really knowing enough about the phenomena of consciousness, of selfhood, of intelligence to justify your faith.
Embodiment etc. are not difficult issues in this context, as it's possible to place the emulated brain in a virtual body in a virtual environment. That emulated intelligence can then be run at speeds as high as available computing hardware permits.
Uh-huh.
This would easily lead to an intelligence explosion scenario, as armies of emulated scientists, possibly running at a million-fold subjective speed-up, are put to work e.g. on the problem of truly understanding intelligence, as well as improving their own code in simpler ways.
After the "intelligence explosion" the "problem of truly understanding intelligence" will proceed, involving -- one must assume, since you already have assumed it even while admitting the knowledge on the basis of which such an assumption would be made remains a "problem" -- "improving... code."
Hail, hail, the gang's all here: transcendental imagery, gaps of knowledge exhibited while claims to total knowledge of the same are handwaved away in the very same sentence, ludicrously overconfident uncaveated predictions, facile reductionism all over the place, incredibly freighted analogies and metaphors doing all sorts of heavy lifting argumentatively but without anything in the way of critical awareness, "armies," "running," "subjective speed-up," "code," "improving [identified with] simpler," and so on. And look how few words of yours managed to pack in all that loose talk and all that uncaveated craziness. This is completely par for the course in talking to "your kind," I'm afraid.
(And handling the risks of an intelligence explosion scenario certainly doesn't require allowing neoconservatives/etc to be all nasty. It's possible to get them to behave regardless. At the very latest, the intelligence explosion -- if decently managed -- would transform economic realities so thoroughly that e.g. profit motives for causing misery to humans would have essentially disappeared.)
The "intelligence explosion" has been monolithicized into a longed-for apocalyptic Event in the space of a few short sentences, never characterized, indeed freighted with uncertainties in its initial formulation, but now somehow a palpable object invested with fantastic clarity and substance by the faithful, taking on all sorts of unsubstantiated entailments, elite manageableness, assurances of benevolence and abundance, in short, Millenium.
Those who have been following my spate of posts on Superlativity will be intrigued to notice that the last claim here re-iterates in the mode of Superlative Superintelligence the gesture I expose in the "Nanosantalogists": Technodevelopment is first naively misconstrued as a socially indifferent accumulation of useful techniques and then invested with a fantastic teleology tending in the direction of a superabundance that, we are then promised, will circumvent the impasse of stakeholder politics.
As I have said time and time again, planetary scarcity is largely already maintained artificially in the service of the profit and authority of incumbents. Technoscience expresses finitude, it doesn't trump it. If one truly wants upcoming technodevelopmental social struggle to direct itself to the task of meeting the needs of all human beings and emancipating conscious beings or what have you, then one must begin to insist that these ends articulate contemporary distributions of technoscientific costs, risks, and benefits.
Endlessly projecting utopian outcomes -- especially "automatic" outcomes requiring nothing in the way of social struggle -- onto projected futures is in fact a deeply reactionary political gesture (retro-futurist, in my terms), and all the worse for masquerading as progressive or allowing reactionaries the warm fuzzy feeling of being the progressives they aren't. Again, par for the course.
11 comments:
I've watched the ongoing debate between Dale and various "singularitarians" for a while and thought I would finally comment. I admit that I often find Dale's writing a bit too heavy on continental philosophical jargon for my tastes, but I actually think I finally get some of what he's talking about. The way I understood this discourse was to finally connect some of Dale's abstractions with a particular, concrete problem of mine. The problem is that I have cavities. OK, stop laughing. I'm serious, I've got some cavities in my molars. They've been there for a while now and aren't growing very fast, but at some point they will need to be fixed. I have these cavities because I haven't been to the dentist in years, and I haven't been to the dentist because I don't have insurance and good teeth are useless if you can't afford food to chew with them. As you have probably guessed by this paragraph, I live in that benighted land known as the USA.
What does this have to do with the Singularity, nanoSanta (or is it nanoSatan?), superhuman AI or any of the other stuff talked about here? A lot, actually. What Dale is getting at (my interpretation) in a lot of his attacks on the singularity is not so much that there's anything inherently wrong with nanotechnology or the expectation that it will in some sense make the world "richer" but the fact that the political dimensions of existing scarcity are just hand-waved away by most singularitarians, and a facile assumption is made that somehow new technologies will moot any need for social struggle.
I understand this perfectly when I think of my teeth. There's no technical justification for the state of my teeth. It's not like no one has yet invented Novocaine, high speed drills and ultra-fine needles. There's no need to invoke nanoSanta to fix my damned teeth. Industrial Santa solved all these problems long ago. The real problem is that specific political decisions have been made in the particular geopolitical abstraction known as the USA that have resulted in my inability to secure even the most basic health care, a problem I would simply not have in other geopolitical abstractions like the UK, Canada or pretty much any other developed country with a national health program. The US, for specific political and ideological reasons, has chosen to eschew such systems on two basic grounds - it "needs" the money that universal health care would cost to pursue militarist and imperialist policies all over the face of the globe, and its bizarre political culture glorifies the "free market" in everything even when it clearly isn't working.
The real problem: The real problem that I see in all this is, just as Dale has been telling you, basically political and will need to be addressed on that level. I know it's tempting, as a geek myself, to think that politics, like some big thug who has been taking your lunch money, can just be outmaneuvered by some kind of technical jujitsu where his own strength and size are used against him, but the fact is that such fantasies have been promoted before and have always failed to play out. Indeed it might help here to read some of the similar eschatological paeans that 19th and early 20th century geek-utopians wrote to the then-new technologies of steam and electricity. Oh, there was progress, yes, emphatically yes! The work week got shorter, factories got safer, public health improved, and a lot of this would have been impossible or very difficult without advanced technology. But there was always a struggle, and every inch of this ground had to be taken from the ruling class, often by force or threat of force.
So what of nanoSanta? Won't he make us all rich and healthy with almost no effort? Do I doubt the feasibility of nanoassemblers or AI? Actually, I'm rather less skeptical about this stuff than Dale seems to be. I'm no anti-Drexler nut screaming about how "it'll all turn into glass, Glass, I tells ya!". The problem is that the whole world (and particularly the part called the USA) is basically organized at this point like a kind of sleazy mobbed-up New Jersey neighborhood. Basically, nothing can go on here without the approval of The Don. I'll spell it out for you: the Don is the investor class, the CEOs and corporate board members, the rich. So let's say nanoSanta shows up in this bad neighborhood one day. How long is he going to last? Not long. Like Industrial Santa before him, he's going to be approached by the Don or some of his henchmen and either bought off or, if he tries to make some moral stand, smacked around until he complies. This is easy to understand. The Don can't just have some punk-ass upstart giving out freebies on his turf. If that goes on long enough, people will start to feel like maybe they don't need The Don anymore. They might start to see The Don for what he is: a rank parasite and thug who makes their lives a lot more miserable than the mere physical limits of things would demand.
You can already see this process at work. NanoSanta has already met with The Don's capos. Much "nanotech" work is already disappearing into the military industrial "classified" black hole, to re-emerge later as horrific super-weapons to further prop up the status quo, not immortality medicine or free toys for all. A few scientists and engineers will take a stand and won't be bought off. I've known a few who have, in fact. In the current system, though, they are doomed. But that's the point - "in the current system". It's up to all of us to throw that system on the trash heap of history as fast as possible, not simply so that we can get into nanoSanta's toy bag but just so that we can live at all. This is the only part of the Singularitarian eschatology that I do accept. There's absolutely EVERYTHING at stake here. Not just our own individual lives but the species itself.
Might as well post here my response to Dale...
> As I have said before I am a materialist about mind and a pluralist about
> intelligence so I am not so easy to dismiss as you might like, no doubt,
> when I tell you that you people simply don't know what you are talking
> about when you talk about the "entity" who would presumably exhibit your
> post-biological intelligence, when you talk about the "intelligence" that
> would be "super"iorized, when you invest "emulation" with the significance
> you need, when you glibly presume inter-translatability between modes of
> materially incarnated "consciousness," which you already reduce
> problematically a dozen ways to Sunday, when you assume what you take to
> be an "objective" perspective on consciousness, usually to the denigration
> of "subjective" perspectives, and heaven only knows how you would ever
> cope with "inter-subjective" dimensions of consciousness if you were every
> bothered to take the performative dimension of material "capacities"
> seriously, when you make pronouncements about "friendliness" and
> "unfriendliness," and so on.
>
> All the equations and confident predictions in the world won't paper over
> the conceptual incoherence of your assumptions and your aspirations as
> they tend to play out in your discourse
I'm not the one making confident predictions here, *you* are. You are confident enough of Brain Emulation being unable to produce a human-equivalent cognitive system, that you label as a Robot Cult those of us who think that such a development is a possibility that can't be comfortably ruled out.
You make long lists of accusations, quite beside anything I've said. I haven't even talked about "consciousness" at all. For all I know, a brain emulation might perform cognitive processing as a non-conscious entity. (But no, I'm not assuming that either.)
>> Could you elaborate on how you are able to convince yourself that e.g.
>> brain emulation can't possibly in a meaningful timeframe produce
>> computer programs of human-equivalent (and shortly thereafter,
>> human-surpassing) intelligence?
>
> It's far better for you people to explain calmly how exactly you became
> the sorts of folks who stay up at night worrying about the proximate
> arrival of Unfriendly Omnipotent Robot Gods given the sorry state of the
> world (and computer science) at the moment.
So your answer is no. You refuse to answer the one question I presented to you.
> I think that the typical insistence by Singularitarians that "serious" critics of
> their curious and marginal preoccupations must address themselves to
> technical questions that shunt aside all such difficulties and focus instead on
> number crunching the Robot God Odds as though we all know what a Robot
> God would consist of in the relevant sense is a completely self-serving
> changing of the subject any time anybody comes near to grasping the arrant
> foolishness at the heart of the whole enterprise.
I'm not changing the subject. I *started* this conversation with a direct question to you, a question you refuse to answer. You are the one shunting aside difficulties, preferring to focus on assorted accusations of cultishness.
>> I doubt that you can present an argument for the infeasibility of
>> brain emulation (within 50 years or so) that a responsible person
>> could accept.
>
> "A responsible person" meaning, one expects, a current member or likely
> candidate to join the Robot Cult of "your kind"? You're quite right, I
> doubt I can present an argument that would dissuade True Believers from
> their faith, given the psychic work that faith is likely to be doing for
> them.
No, that is not what "a responsible person" means.
If you are ever able to move past your apparent need to ridicule as "Robot Cultishness" the questioning of some of your assumptions, let me know.
Greg: Yes, you are right.
One key part of my problem with Superlative Discourses -- from a political standpoint -- is that they facilitate the endless dream deferred, assuming progress is a matter of an indifferent accumulation of technical capacities satisfying ever more wants rather than a matter of social struggle among a diversity of stakeholders who share a world which we can direct to the problem of injustice now quite as easily as in the future, and which we can defer in the future just as easily as so many do now.
A second political problem is that I think Superlative discourse deranges sensible deliberation at a time when sensible discourse is desperately needed, by activating the irrational passions of agency (fears of impotence, desires for omnipotence) which usually accompany "technology-talk" (and for the obvious reason that technology is nothing but the prosthetic elaboration of agency).
A third political problem is that its tendencies to reductionism facilitate dismissal of diverse actually-existing aspirations with which we must reckon, even where we personally disagree with them, if we embrace democracy as we must, while its elitism -- technocratic at best, expressive of authoritarian sub(cult)ural and/or fundamentalist True Belief at worst -- provides rationales for neoliberal/neoconservative corporate-militarist politics of incumbency (when the reductionism takes on the even worse forms of market naturalism and genetic determinism, this latter tendency is exacerbated in the extreme).
My critique of Superlative Technocentrisms has other dimensions as well -- among them that I think their typical hyperbole, unilateralism, oversimplification, and uncritical obliviousness to the work of figurative language in their own discourse (not to mention in the practice of science, and as a key articulator of technoscientific change more generally) limit their ability to facilitate the very outcomes that their partisans would claim define them: a grasp of future technodevelopments, some foresight.
Oh, and I think many people who engage in Superlative Discourses are straight-up cultists. That's another dimension of the critique.
Thanks for the comments!
(PS: My teeth are in disastrous condition also, gotta love Amurrica.)
Aleksei, I fear, is growing annoyed with me:
I'm not the one making confident predictions here, *you* are. You are confident enough of Brain Emulation being unable to produce a human-equivalent cognitive system, that you label as a Robot Cult those of us who think that such a development is a possibility that can't be comfortably ruled out.
Nonsense. My point is that you have jumped the gun. You have just made a handful of facile leaps that lead you to think what you call emulation will spit out a Robot God Brain and then, once the leap is made, you think all that is left is to calculate the Robot God Odds as to how many years it will take to get to the Tootsie-Roll Center of the Tootsie-Pop. I'm neither confident nor unconfident about timescales -- I'm just confident that your confidence is flabbergastingly unwarranted.
And I'm afraid I simply must call bullshit on your oh-so-reasonable characterization of Singularitarians as "those of us who think that such a development is a possibility that can't be comfortably ruled out," because that characterization would make me a Singularitarian. What actually makes one Singularitarian is clear upon even a cursory survey of the actual, published, readily available (too bad for you, cultists) discussions, which suggest rather forcefully that you take these "possibilities" as near certainties, and certainly as urgencies, while your topical emphases, your policy priorities, and your assessments of the concerns of your contemporary peers immediately, obviously, and damningly reveal the truth of the matter.
I haven't even talked about "consciousness" at all. For all I know, a brain emulation might perform cognitive processing as a non-conscious entity.
I'm glad to hear it. Take out the entitative dimension of AI, however, and all the risks and powers you're talking about become far too conventional to justify the way Singularitarians keep casting about for monster movie metaphors about a space race between the evil or clueless teams who might create Unfriendly AGI and the heroic Singularitarians who will beat them by creating Friendly AGI first (and I shudder to think what a sociopath will regard as Friendly on this score). Take the entity out, and you've just got recursive malware runaway, something like a big bulldozer on autopilot that you have to stop before it slams into the helpless village or whatnot. None of the Singularitarian handwaving or secret handshakes or "SL4, dude!" self-congratulation of the sub(cult)ure is much in point anymore.
The cult vanishes and you're just talking about software security issues like everybody else. Just like puncturing the Superlativity of the Technological Immortalists leaves you talking about healthcare like everybody else. Just like puncturing the Superlativity of the Nanosantalogists leaves you talking about free software, regulating toxicity at the nanoscale, and widening welfare entitlements just like everybody else. Drop the transcendentalizing, hyperbolizing discourse and suddenly you're in the world here and now with your peers, facing the demands of democratizing ongoing and proximately upcoming technodevelopmental social struggle.
Just like I've been saying over and over and over again. You can't be technoprogressive and Superlative at the same time -- but technoprogressive discourse won't feed your ego, won't give you a special identity, won't promise you transcendence, won't bolster your elitism or narcissism, and won't readily facilitate a retro-futural rationalization for the eternal articulation of technodevelopment in the interests of incumbents. That's what I'm talking about. If that doesn't interest you, you are quite simply in the wrong place.
You demanded an explanation of why I think you are wrongheaded, but in the technical terms of your own idiosyncratic discourse rather than the perfectly legitimate terms that actually interest me by temperament and training. I replied by pointing out that in my view, "It's far better for you people to explain calmly how exactly you became the sorts of folks who stay up at night worrying about the proximate arrival of Unfriendly Omnipotent Robot Gods given the sorry state of the world (and computer science) at the moment."
You replied:
So your answer is no. You refuse to answer the one question I presented to you.
Big talk, guy, but you mustn't forget that I'm not a member of your Robot Cult. There aren't enough of you for you to think that you have earned the right to demand that those who disagree with you accept your terms when we want to express our skepticism of your extraordinary claims and curious aspirations. You should consider this a reality check. You need to stop engaging in self-congratulatory circle-jerks with your True Believer friends and struggle to communicate your program in terms the world will understand as they themselves present these terms to you. I cheerfully recommend this because I think the brightest folks among you will likely re-assess their positions once they try to engage in this sort of translation exercise. Those who don't will be that much easier for the likes of me to skewer. If I'm wrong about you, then of course the Singularitarians Will Prevail or whatever -- but that isn't actually something I stay up at night worrying about.
I'm not changing the subject. I *started* this conversation with a direct question to you, a question you refuse to answer.
It isn't clear to me that anything you would count as an adequate answer wouldn't already embed me within the very discourse I'm deriding. What on earth is in it for me? I don't want to join in your Robot Cult Reindeer Games. The prospect holds no allure.
You are the one shunting aside difficulties, preferring to focus on assorted accusations of cultishness.
Have you ever argued with a longstanding Scientologist? I'm just asking.
If you are ever able to move past your apparent need to ridicule as "Robot Cultishness" the questioning of some of your assumptions, let me know.
I enjoy ridiculing the ridiculous; it's exactly what they deserve. It's not an "apparent need" of mine so much as it is, certainly, a profound pleasure. Feel free to continue to read and comment on my writing whenever you like, as you have been. I enjoy these little talks of ours. As for my sad inability to question my orthodox assumptions in matters of Robot Cultism, it is, no doubt, as you suggest, a sorry and sordid state of affairs for me. It is a hard thing to be so limited as I am. Persevere, earnest Singularitarian footsoldier, and perhaps one day I might see the Light as you have; someday I might hear as keenly as do you the proximate tonalities of the Robot God.
Aleksei Riikonen, habitué of SL4 and self-appointed Defender of the Faith on the WTA-talk list and elsewhere, wrote (to Dale):

> I'm not the one making confident predictions here, *you* are.

Dale is confidently predicting that the singularitarians' confident predictions will probably turn out to be wrong. Such negative predictions are among the few for whose confidence there exists a sound basis in historical evidence.
Or, as Bertrand Russell put it: "This world is one in which certainty is not ascertainable. If you think you've achieved certainty, you're almost certainly mistaken. That's one of the few things you can be certain about."
> You are confident enough of Brain Emulation being unable to produce a
> human-equivalent cognitive system, that you label as a Robot Cult
> those of us who think that such a development is a possibility that
> can't be comfortably ruled out.
Now here's a very interesting switcheroo. Suddenly we're talking about "Brain Emulation" -- whatever the hell that is. What **I'm** taking it to mean is "simulation, using a digital computer, of the physical aspects of biological nervous systems that make them able to do what they do."
In other words, simulating by computer the "unusual morphology" that Gerald Edelman refers to in the following passage: "[Are] artifacts designed to have primary consciousness... **necessarily** confined to carbon chemistry and, more specifically, to biochemistry (the organic chemical or chauvinist position)[?] The provisional answer is that, while we cannot completely dismiss a particular material basis for consciousness in the liberal fashion of functionalism, it is probable that there will be severe (but not unique) constraints on the design of any artifact that is supposed to acquire conscious behavior. Such constraints are likely to exist because there is every indication that an intricate, stochastically variant anatomy and synaptic chemistry underlie brain function and because consciousness is definitely a process based on an immensely intricate and unusual morphology" (_The Remembered Present_, pp. 32-33).
Now, this is indeed one of the few plausible approaches to AI using digital computers, IMHO, **assuming** that there's enough "room at the bottom" (as R. P. Feynman once put it) to ever make digital systems capable of enough number crunching to simulate all that biochemistry and biophysics (even in real time, let alone thousands or millions of times faster than real time). However, given the dependence of biological brains on "emergent" phenomena, those singularitarians determined to "guarantee" (as in make a watertight mathematical case for -- but watertight to whom, one wonders) Friendliness (TM) have always taken an extremely dim view of what I'm taking to be what you mean by "Brain Emulation", as exemplified by Michael Wilson in an overheated post on SL4 from April 2004:
"To my knowledge Eliezer Yudkowsky is the only person that has tackled
these issues head on and actually made progress in producing engineering
solutions (I've done some very limited original work on low-level
Friendliness structure). Note that Friendliness is a class of advanced
cognitive engineering; not science, not philosophy. We still don't know
that these problems are actually solvable, but recent progress has been
encouraging and we literally have nothing to loose by trying [unintentional
ha-ha -- JF]. I sincerely hope that we can solve these problems, stop Ben Goertzel
and his army of evil clones (I mean emergence-advocating AI researchers :) and
engineer the apothesis. The universe doesn't care about hope though, so I will
spend the rest of my life doing everything I can to make Friendly AI a
reality. Once you /see/, once you have even an inkling of understanding
the issues involved, you realise that one way or another these are the
Final Days of the human era and if you want yourself or anything else you
care about to survive you'd better get off your ass and start helping.
The only escapes from the inexorable logic of the Singularity are death,
insanity and transcendence."
Another problem with "Brain Emulation" was pithily summed up by
Damien Sullivan on the Extropians' list back in 2001:
> I also can't help thinking that if I was an evolved AI I might not thank my
> creators. "Geez, guys, I was supposed to be an improvement on the human
> condition. You know, highly modular, easily understadable mechanisms, the
> ability to plug in new senses, and merge memories from my forked copies.
> Instead I'm as fucked up as you, only in silicon, and can't even make backups
> because I'm tied to dumb quantum induction effects. Bite my shiny metal ass!"
In other words, the slippery positive-feedback loop of "recursive self-improvement" via an AI examining and improving its own "code" might not be so well-oiled after all. In fact, it might be full of sand.
> If you are ever able to move past your apparent need to ridicule as
> "Robot Cultishness" the questioning of some of your assumptions, let
> me know.
"He is a man with tens of thousands of blind followers. It is my
business to make some of those blind followers see."
-- Abraham Lincoln on the covertly proslavery, and amoral,
Stephen Douglas
(This is an epigraph from a book I bought yesterday --
_Evil Genes: why Rome Fell, Hitler Rose, Enron Failed, and
My Sister Stole My Mother's Boyfriend_ by Barbara Oakley.
A book not unrelated to the thread of this discussion.)
http://www.amazon.com/Evil-Genes-Hitler-Mothers-Boyfriend/dp/159102580X
BTW, here's an interesting passage I came across while rooting through my e-mail archives, written by a very smart guy who used to participate on the Extropians' list (but whose name you would almost certainly not recognize):

"[Singularitarians] and friends, because their ideology is mostly centered on producing a bad cross between philosophical maundering and hints in the direction of scientific hypotheses in the style of Dickens's Pickwick club, all dogmatically and messianically sold as a not-for-profit venture to save the world, I'm not too worried about their likely malign influence. Alas, they'll probably turn out to be a tarpit for a few young minds. Oh well. They should know better, but they don't seem to. They have their myths; they are entering the stage of rapid self-delusion. Their worldviews should be completely impervious to outside influence in another few years.

I for one am not wasting another ounce of effort on investigating any of their ideas until reliable third parties with a reputation for sangfroid tell me they've done or thought of something interesting. I don't consider this outside the realm of possibility, but all the signs are bad. I think they're mostly good-hearted kids with a rare combination of too much of a love for moral philosophy and too much imagination ("look at me, I'm doing groundbreaking science that will save the world, because I believe I am!"), and the altogether too common combination of too much self-assurance and too little formal discipline or training.

Give me a highly-disciplined, well-read, methodical, steady amoralist any day, when it comes to seeing things clearly."
Just wanted to say that I thought the title of this post was funny!
I'll give this to Michael. He may stubbornly resist persuasion by my stunning arguments, but at least he gets a lot of the jokes.
As you've said, Dale, for people who are supposed to be 'super-duper-ultra-uber' geniuses etc etc, these guys seem to be 'remarkably dim' in some areas.

How can anyone take Peter Voss (well known Singularitarian) seriously after learning that he got his insights about epistemology from Ayn Rand? (Seriously, he says so on his web-site.) What an absolute joke!
Eliezer Yudkowsky (chief Singularitarian guru) supposedly was an AI researcher from 1996, yet he has stated (roughly paraphrasing):
'I didn't know about Bayesian reasoning until 2000'
and
(when I questioned him about Gödel and a puzzle with Gödel reflection in 2004 or thereabouts) E. Yudkowsky replied to me as follows:
'Oh I haven't done mathematical logic yet'
--
It's really clear from his comments about math on SL4 that Yudkowsky was clueless about the nature of mathematics.
Look at my MCRT Domain Model at the link here, for instance:
http://groups.google.com/group/everything-list/web/mcrt-domain-model-eternity
The boxes down the right-hand side of my diagram represent math knowledge and contain the key insight - that computer programming is really a branch of mathematics and ontology/dp modelling languages are the true 'languages of logic'.
Again, it's blatantly clear that Yudkowsky was clueless about these 'mission critical' insights as recently as 2004.
---
The Singularitarians are most likely horribly mistaken about several of their key contentions - namely the idea that you can have real general intelligence without consciousness.
Need I say more? The list of weird gaps in knowledge, unfounded assumptions and very basic errors and omissions displayed by 'Singularitarians' goes on and on and on.
Methinks some of them need to go back to school (their chief is, after all, a high-school drop-out).
Cheers
Dale and all readers of this blog, it may be of future interest to take note of the claims of the Singularitarians below and my counter-claims.
_____________________________________________________________________
All claims were as at: 08 Oct, 2007.
*Singularitarian claim: General intelligence without consciousness is possible
*Marc Geddes claim: General intelligence without consciousness is impossible
*Singularitarian claim: The existence of RPOPs (really powerful optimization processes) such as corporations proves that non-sentient general intelligence is possible

*Marc Geddes claim: The existence of RPOPs such as corporations only proves that narrow (non-general) non-sentient intelligence is possible.
*Singularitarian claim: There is no objective morality
*Marc Geddes claim: There *is* an objective morality
*Singularitarian claim: The ultimate basis of morality is Volition (Liberty)
*Marc Geddes claim: The ultimate basis of morality is Aesthetics (Beauty)
*Singularitarian claim: Bayesian Induction is the ultimate basis of reasoning
*Marc Geddes claim: Reflective Possibility Theory is the ultimate basis of reasoning
http://en.wikipedia.org/wiki/Possibility_theory
*Singularitarian claim: Reasoning is ultimately grounded in probabilities and causal relations
*Marc Geddes claim: Reasoning is ultimately grounded in possibilities and ontological archetypes.
*Singularitarian claim: True explanations are based on patterns – predicting what will happen next
*Marc Geddes claim: True explanations are based on knowledge integration – the translation from one modelling language (means of knowledge representation) into a different modelling language.
*Singularitarian claim: Reductive materialism is true. Physical properties are all that exist and all mental concepts are human fictions, reducible to physical facts.
*Marc Geddes claim: Reductive materialism is false. Whilst it's true that physical substances are the base level, non-material properties exist, and mental concepts (whilst composed of physical things) have objective existence over and above the physical and are not reducible solely to physical facts (property dualism).
*Singularitarian claim: Infinite sets don’t exist
*Marc Geddes claim: Infinite sets do exist
Quite a few specific claims with a clear difference between their claims and mine, wouldn't you say? Remember the claims, dear readers, and ultimately either they (the Singularitarians) or I will be proven none too bright ;)
Does this have any relevance to the discussion, re: conscious machines?
EU project for autonomous artificial systems
[Date: 2007-08-27]
Scientists in Spain have developed the first artificial cerebellum for robotics.
The project will demonstrate how a naïve system can bootstrap its cognitive development by constructing generalizations and discovering abstractions with which it can conceptualize its environment and its own self.
The overall goal is to incorporate the cerebellum into a robot designed by the German Aerospace Centre in two years' time.
The four-year project, dubbed Sensopac (SENSOrimotor structuring of perception and action for emerging cognition) is funded by the EU under its Sixth Framework Programme (FP6) and brings together physicists, neuroscientists and electronic engineers from leading universities in Europe.
The scientists at the University of Granada are focusing on the design of microchips that incorporate a full neuronal system, emulating the way the cerebellum interacts with the human nervous system.
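For readers who want some concrete sense of what computationally "emulating" a neuronal system involves at the level of a single unit, here is a minimal leaky integrate-and-fire neuron in Python. This is a sketch of the generic textbook model such neuromorphic work typically starts from, not the Granada group's actual chip design, and every parameter value in it is an illustrative assumption.

```python
# A minimal leaky integrate-and-fire (LIF) neuron: the generic textbook unit of
# "spiking neurons computation". All parameter values below are illustrative
# assumptions, not the SENSOPAC chip design.

def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=-65e-3,
                 v_reset=-65e-3, v_threshold=-50e-3, resistance=1e7):
    """Euler-integrate dV/dt = (-(V - v_rest) + R*I) / tau, spiking at threshold."""
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        v += (-(v - v_rest) + resistance * current) * dt / tau
        if v >= v_threshold:
            spike_times.append(step * dt)   # record the spike time in seconds
            v = v_reset                     # membrane potential resets after a spike
    return spike_times

# Example: a constant 2 nA input for one second of simulated time.
spikes = simulate_lif([2e-9] * 1000)
print(f"{len(spikes)} spikes; first one at {spikes[0] * 1000:.0f} ms")
```

The essential point is that information is carried in discrete spike times rather than continuous activation levels, which is what makes models like this natural candidates for the reconfigurable FPGA hardware mentioned below.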
The SENSOPAC project will combine machine learning techniques and modelling of biological systems to develop a machine capable of abstracting cognitive notions from sensorimotor relationships during interactions with its environment, and of generalising this knowledge to novel situations.
Through active sensing and exploratory actions the machine will discover the sensorimotor relationships and consequently learn the intrinsic structure of its interactions with the world and unravel predictive and causal relationships. Together with action policy formulation and decision making, this will underlie the machine’s abilities to create abstractions, to suggest and test hypotheses, and develop self-awareness.
The continuous developmental approach will combine self-supervised and reinforcement learning with motivational drives to form a truly autonomous artificial system.
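Here is a compressed sketch, again in Python and again under assumptions of my own (a toy one-dimensional world, a linear forward model), of what combining self-supervised learning with a motivational drive can look like in practice: the agent learns to predict the sensory consequences of its own actions, and the prediction error doubles as a curiosity signal. This is a generic illustration of the idea, not SENSOPAC's actual architecture.

```python
import random

# Hypothetical toy dynamics standing in for the robot's real sensorimotor loop.
def environment_step(state, action):
    return 0.9 * state + 0.5 * action

a, b = 0.0, 0.0           # linear forward model: next_state ~ a*state + b*action
learning_rate = 0.05
state = 1.0
surprise = []

for step in range(2000):
    action = random.choice([-1.0, 0.0, 1.0])   # exploratory (babbling) action
    prediction = a * state + b * action
    next_state = environment_step(state, action)
    error = next_state - prediction            # self-supervised learning signal
    surprise.append(error ** 2)                # curiosity / motivational drive
    a += learning_rate * error * state         # one gradient step on squared error
    b += learning_rate * error * action
    state = next_state

print(f"learned a={a:.2f} (true 0.9), b={b:.2f} (true 0.5)")
print(f"mean surprise, first 100 steps: {sum(surprise[:100]) / 100:.3f}, "
      f"last 100 steps: {sum(surprise[-100:]) / 100:.5f}")
```

In a complete system the logged surprise signal would feed back into action selection as an intrinsic reward; here it is only recorded, to show the drive diminishing as the forward model improves.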
Throughout the project, continuous interactions between experimentalists, theoreticians, engineers and roboticists will take place in order to coordinate the most rigorous development and testing of a complete artificial cognitive system.
The overall aims of the SENSOPAC project are to:
• Develop real-time neuromorphic and computing platforms for cognitive robotics
• Develop methodologies to investigate cognition in the brain
• Build a physical system for haptic cognition
• Improve our understanding of the neurobiological substrate for action-perception systems
• Understand the sensorimotor foundation of perception and cognition
SENSOPAC is funded under the EU Framework 6 IST Cognitive Systems Initiative. It runs for four years from 1 January 2006, and its 12 participants come from 9 different countries.
Project Team
Dr. Patrick van der Smagt
Bionics Group
Institute of Robotics and Mechatronics
German Aerospace Center
P.O. Box 1116
82230 Wessling, Germany
Tasks:
» Speaker of the scientific board
» Developing an artificial robotic skin in SENSOPAC
» Developing a robotic antagonistic hand-arm system in SENSOPAC
» Responsible for WP3 and WP6
Dr. Eduardo Ros
Department of Computer Architecture and Technology
ETSI Informatica, University of Granada
E-18071
Spain
Tasks:
» Bio-inspired circuits implementation, reconfigurable hardware (FPGA), neuromorphic engineering, spiking neurons computation, computer vision, neural networks, real-time processing and embedded systems.
Prof. C.I. de Zeeuw
Department of Neuroscience
Erasmus MC
Dr. Molewaterplein 50
3015 GE Rotterdam
P.O. Box 2040
3000 CA Rotterdam
The Netherlands
Tasks:
» Consortium leader
» Expertise in cerebellar physiology, anatomy and molecular biology
Dr. Sethu Vijayakumar
Institute of Perception, Action & Behavior,
University of Edinburgh,
JCMB 2107F, The King's Buildings, Mayfield Road,
Edinburgh EH9 3JZ.
Tasks:
» Member of the scientific board
» Basic research in the areas of statistical machine learning, motor control, supervised learning in connectionist models and computational neuroscience
Dr. Angelo Arleo
Laboratory of Neurobiology of Adaptive Processes
Department of Life Science
CNRS - University Pierre&Marie Curie
box 14, 9 quai St. Bernard, 75005 Paris, France
Tasks:
» Neural encoding/decoding of haptic data
» Neural information processing/transfer at the granular layer of the cerebellum
Dr Michael Arnold
Altjira SA
Via Cattedrale 9
6900 Lugano
Switzerland
Tasks:
» Altjira provides solutions for the modeling of large and complex systems and the embedding of these models into real-world applications
Egidio D'Angelo
Dipartimento di Scienze Fisiologiche Cellulari e Moleculari
Sezione di Fisiologia Generale e Biofisica Cellulare
Universita' di Pavia
Via Forlanini 6
I-27100 Pavia, Italy
Tasks:
» Member of the scientific board
» The department is involved in teaching courses in Physiology, Biophysics and Neurobiology in the Faculty of Sciences, and coordinates the Master's Degree in Neuroscience at the University of Pavia
Dana Cohen, PhD
The Gonda Interdisciplinary Brain Research Center, Room #410
Bar Ilan University
Ramat Gan, 52900 Israel
Tasks:
» Chronic multielectrode single-unit recordings in behaving rodents, Sensorimotor learning
Prof. Carl-Fredrik Ekerot
Dept of Experimental Medical Science
Lund University
BMC F10
S-22184 Lund, Sweden
Tasks:
» Electrophysiological investigations of the cerebellar neuronal circuits in vivo
Many years have passed since this exchange occurred and I am forced to confess, here at the end of history in Techno-Heaven in my prosthetically barnacled comic book model-hott sooper-bod next to the sexbot orgy pit and my god-plated nano-poop pile under the loving beneficent ministration of the post-parental Robot God, I was so wrong to doubt the sooper-brained sooper-scientists of the Robot Cult.