Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Friday, October 05, 2007

Responding to A Few Objections...

Upgraded from Comments... Friend of the blog "Utilitarian" posts some interesting objections (scroll down to the Comments on the last couple of posts for some really interesting exchanges among some Amorous Mundites):

6. You defer more considerations to future generations (or our future selves)


I recognize that future generations and our future selves will articulate the shape of technodevelopmental social struggles and concrete outcomes, and so I talk about technoscientific change in a way that reflects this recognition, and I highlight the limitations of models of technoscientific change that fail to reflect it properly.

and [you] place much less weight on the argument that reducing existential risk should be our overwhelming ethical priority,

I stress the need to democratize deliberation about technoscientific change so that the distribution of technodevelopmental costs, risks, and benefits better reflects the expressed sense of the stakes of the actual diversity of the stakeholders to that technoscientific change. I do not object to democratic deliberation about risks (including existential risks) in the least.

I will say that I do object to the ways in which existential risk discourse has taken on what looks to me like the reactionary coloration of Terror and Security discourse in this era of neoliberal/neoconservative corporate-militarist distress, and I especially disapprove of the move of would-be professional futurologists who seem to believe now that the mark of their seriousness is precisely the skewing of futurism into a preoccupation with hyperbolic risk and superlative tech of a kind that mobilizes authoritarian concentrations of police power and facilitates endless welfare for the already rich stealthed as "Defense."

while placing more value on solving immediate problems.

Our ongoing collaborative solution of contemporary problems peer-to-peer becomes the archive to which we will make indispensable recourse as we seek to address future problems. Foresight will mean different things and look quite different to those who advocate, as I do, peer-to-peer democracy rather than elite technocracy, as I think many would-be professional futurists do.

7. You place less credence in the feasibility of superintelligent AI within the next 25, 50, and 1000 years than I do,

Not to put too fine a point on it, I won't talk about feasibility in principle or timescale estimation for the technodevelopmental arrival of post-biological superintelligent entities until I am persuaded that their advocates know what the word "intelligent" means in the first place.

9. Discussion of possible advanced AI is a projection/transcendentalization/warped outgrowth of concerns about 'networked malware.' [This one just totally baffles me....]

My point is just to say that the closest Singularitarian discourse ever comes to touching ground in my view is when it touches on such issues of networked malware. Needless to say, one needn't join a Robot Cult to contribute to policy or commonsense in this area -- and indeed, I think it is fair to say one is more likely to so contribute if one doesn't join a Robot Cult. As you say, this sort of talk of networked malware doesn't get us to entitative superintelligent AI. You'll forgive me if I suggest that this is a merit and not a flaw of this sort of talk.

James Hughes has written and spoken about evolving computer viruses on the Internet, and expecting advanced AI to come about through such a process,

James is a close friend and respected colleague. But I don't think this is a particularly compelling line of his, and it hardly seems to be a preoccupation of his either, as far as I can tell.

which seems to be tremendously less plausible than building an AI intentionally (including through the use of evolutionary algorithms or brain emulation).

I think these scenarios are both sufficiently close to zero probability that squabbles about their relative plausibility are better left to angels-on-pinheads pinheads, to be perfectly frank about it. As I have often stated, the only "superintelligence" that interests me particularly involves network-mediated practices of collaborative problem solving and creative expressivity in actually-existing humans, peer-to-peer. I leave the "serious" business of calculating the Robot God Odds to others.

Alternatively, it seems absurd to think that fears about computer viruses and about arbitrary utility-maximizing intelligences are related, even psychologically (fears about computer viruses are not fears about agents).

Well, you know, I don't agree that it is absurd to connect these fears in the least -- as even a cursory summary of the tropes of bad made-for-television science fiction will surely attest. More to the point, all fears and fantasies of technodevelopment are connected in my view to fears and fantasies about agency (the discursive poles of which are impotence and omnipotence). I think it very likely that many who would calculate the Robot God Odds in the here and now are indulging in fact in a surrogate (and often rather traumatized) meditation on the relative technodevelopmental empowerment and abjection they are subject to, immersed, as are we all, in the deranging storm-churn of ongoing planetary technoscientific change.

9 comments:

Anonymous said...

"I recognize that future generations and our future selves will articulate the shape of technodevelopmental social struggles"
But building a corpus of research to inform them may be most beneficial if undertaken early, as with investing in capital stocks, technological research, and environmental preservation. Further, building a body of knowledge, or of interested citizens, to engage in democratic debate has to precede that debate.

"I will say that I do object to the ways in which existential risk discourse has taken on what looks to me like the reactionary coloration of Terror and Security discourse in this era of neoliberal/neoconservative corporate-militarist distress,"
It seems to me that the biggest problem with security discourse is its dishonest application, e.g. the Cheney "1% doctrine" of treating improbable risks as certainties.

Allocation of 'Homeland Security' funds to political pork is primarily a problem of corruption. Concern over climate change has created new opportunities for corruption in the allocation of carbon quotas (to existing firms rather than through auction), in subsidies for Iowa ethanol while tariffs restrict the Brazilian variety, etc. But these problems do not mean that we should not be concerned with global warming, particularly the possibility of catastrophic positive feedback worse than the IPCC consensus predicts.

"Not to put too fine a point on it, I won't talk about feasibility in principle or timescale estimation for the technodevelopmental arrival of post-biological superintelligent entities"
Would it be fair to say that you're operating on the rule-of-thumb not to consider this until either a) broad populations are concerned with it or b) a majority or major group of relevant concerned scientists indicates that it should be brought to public attention?


"My point is just to say that the closest Singularitarian discourse ever comes to touching ground in my view is when it touches on such issues of networked malware."
Thanks for the clarifications wrt this, Hughes, etc.

jimf said...

"Utilitarian" wrote:

> [Dale wrote:]
>
> > I recognize that future generations and our future selves
> > will articulate the shape of technodevelopmental social struggles
>
> But building a corpus of research to inform them may be most
> beneficial if undertaken early

But the earlier it's undertaken the sillier and more irrelevant
it will seem when the time comes.

Have you ever read Herman Kahn's _The Year 2000: A Framework for
Speculation on the Next Thirty-Three Years_ (1967)?

It seems pretty fuzzy, and occasionally pretty funny, now that 2000
has come and gone.

Likewise with something like Arthur C. Clarke's _Profiles of the Future_.

Outright **fiction** is usually a more useful means of
trying out scenarios about the future than "serious" research, and
gets a much wider audience if successful as entertainment.
Sometimes such works turn out to be remarkably prescient
(though that can only be known, of course,
in retrospect). E. M. Forster's "The Machine Stops", for example,
is an uncannily accurate story about a networked world, from
a century ago, and he wasn't even an SF writer like H. G. Wells.

Apropos machine intelligence, some of the ideas touted as high
"shock level" among the singularitarians occurred in Samuel Butler's
satirical _Erewhon_ in 1872.

"Erewhon revealed Butler's long interest in Darwin's theories of
biological evolution, and in fact Darwin had, like him, visited New Zealand.
In 1863, four years after Darwin published On the Origin of Species,
Butler published a letter to the editor of a New Zealand newspaper
captioned Darwin Among The Machines, in which Butler compared human
evolution to machine evolution, and prophesized (half in jest) that
machines would eventually replace man in the supremacy of the earth:
"In the course of ages we shall find ourselves the inferior race."
The letter raises many of the themes now being debated by proponents
of the Technological Singularity, namely, that computers are evolving
much faster than biological humans and that we are racing toward
an unknowable future with explosive technological change."
http://en.wikipedia.org/wiki/Samuel_Butler_(1835-1902)

I think the 1970 _Colossus: The Forbin Project_ is a **terrific**
movie, irrespective of the plausibility of the AI portrayed therein
(studded with blue-and-charcoal console panels from IBM 1620s
and with a big Times Square marquee responding OLD PROGRAM NAME
when somebody submits an inquiry to the computer [somebody'd had
some exposure to the Dartmouth Timesharing System] ;-> ).
http://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project

> Dale wrote:
>
> > I won't talk about feasibility in principle or timescale estimation
> > for the technodevelopmental arrival of post-biological superintelligent
> > entities. . . [until we have a clue what "intelligent" means].
>
> Would it be fair to say that you're operating on the rule-of-thumb not
> to consider this until either a) broad populations are concerned with it
> or b) a majority or major group of relevant concerned scientists indicates
> that it should be brought to public attention?

Well, the latter, in my view.

"Utilitarian" wrote, earlier:

> One thing I would like to see (and would pay for) is a professional
> elicitation of opinion from the AI community, like the expert elicitations
> on global warming conducted for the IPCC

Well, the "RAND Corporation is the original non-profit think tank helping
to improve policy and decision making through objective research and analysis."
http://www.rand.org/

If you've got seven figures' worth of cash (I'd guess), they might be willing
to write a report for you.

They were doing it 40 years ago: "[T]he year following
the publication of my first investigation of work in
artificial intelligence, the RAND Corporation held a
meeting of experts in computer science to discuss, among
other topics, my report."
-- Hubert L. Dreyfus, _What Computers Still Can't Do:
A Critique of Artificial Reason_, MIT Press, 1992,
Introduction, pp. 86-87

I'd make sure your elicitation of opinion contained input
from experts other than those self-identified as belonging to
the "AI community", though.

I'd like to see plenty of input from the neuroscience community.

From my archives:

---------------------------------------
Subject: Real-world AI research

Q: Why does this kind of stuff never (or almost never) get
mentioned in the on-line "transhumanist" community?

A: Because it isn't the symbolic, language-based shortcut
to AI that the GOFAI community has been longing for (and pretending
it was on the verge of creating) since the 50's. And the
transhumanists, for all their future-shock-level posturing, seem
to be stuck in the 50's, in the age of ray guns 'n' rocket ships
sci-fi. The heyday of L. Ron Hubbard and Ayn Rand.
That's my answer, anyway.

A research article based on the latest of the
Darwin/NOMAD series of robots at the
Neurosciences Institute in La Jolla:

http://www.idiap.ch/~rchava/sab06wk/talks.html
--------------------------------------------------------------
Jason G. Fleischer

Integration and loops: fusing multiple sensory inputs
and coding behavioral context in a hippocampal model
Successfully learning a route through an environment
requires a memory for the sequence of sensory inputs
along the path. This integration of multisensory information
("what") over time ("when") and space ("where") is
referred to as episodic memory [5]. Navigation, in this
respect, can be argued to be a special case of a more
general episodic memory process.

In humans, the medial temporal lobe, including the
hippocampus, is necessary for the acquisition of episodic
memories [13]. Episodic memory type responses have
also been shown in rodent hippocampus, where cells
display place-correlated firing patterns irrespective
of context [11], and also context-sensitive place-correlated
firing depending on where the animal has been or
where it is going [4]. Hippocampal place cells seem to
have firing fields that can be related to visual cues,
olfactory cues, tactile cues, auditory cues, or
combinations of these [10]. Additionally, these cells
maintain their place-correlated firing patterns even
when the animals are deprived of one of their
senses [12], thus demonstrating that episodic memory
is associative and capable of pattern completion even
when some input is missing.

It is likely that the unique anatomy and connectivity
of the hippocampal region and its surrounding areas
are critical both for spatial navigation and for
the formation of episodic memories. Highly processed
neocortical information from all modalities converges
onto the medial temporal lobe. After several levels
of further processing within the medial temporal
lobe, and specifically the hippocampus, information
diverges in broad projections back to the neocortex [9, 15].
Within the hippocampus itself, there are several levels
of looping over different timescales [1, 2, 14, 16].
A possible function of this unique anatomy is that
the convergence of sensory projections on this area
provides the multisensory information needed to form
reliable episodic memory, and that the looping of
information and relatively sparse connectivity within
the hippocampus allow it to integrate sensory input over
time [8].

This theory can be investigated using the Brain-Based
Device (BBD) methodology [3, 6]. BBDs can be considered a
class of neurally-controlled robots, whose behavior is driven
by a neuronal simulation based on features of vertebrate
neuroanatomy and neurophysiology, emphasizing the organism's
interaction with the environment, and strictly constrained
by the following design principles:

1. The device needs to be situated in a physical environment.

2. The device needs to engage in a behavioral task.

3. The device's behavior must be controlled by a simulated
nervous system having a design that reflects the vertebrate
brain's architecture and dynamics.

4. The device must possess a value system that signals the
salience of the environmental cues and that modulates
plasticity in the simulated nervous system, resulting
in modification of behavior.

5. The behavior of the device and the activity of its simulated
nervous system must allow comparisons with empirical data.

Because of these constraints, BBD simulations tend to require
large-scale networks of neuronal elements, high performance
computing to run the network in real time, and the engineering
of specialized physical devices to embody the network. The
power of this approach is that it allows for the simultaneous
recording of the state of all components of its simulated nervous
system at all levels during a behavioral task in the real world.

Previously, we have presented results from Darwin X [8, 7], a
BBD model of selected sensory and motor cortical areas and
the hippocampal formation that performs a task similar to the
Morris water maze. In that work we investigated the formation
of place cell activity and the pathways that created such activity.

The new model, Darwin XI, extends the previous model, and now
includes a total of five sensory modalities. Darwin XI includes
the three modalities used in Darwin X, what (inferotemporal)
and where (parietal) visual pathways and head direction
(anterior thalamic nucleus). Darwin XI also adds two new
modalities, whisker texture (SII), and a pseudo-cortical
area containing population coded location in the environment
obtained through a laser rangefinder. Each sensory input
creates organized activity in a neuronal area that is
analogous to a neocortical area in the vertebrate brain.
Inputs from all these sensory areas converge sparsely on
entorhinal cortex, which in turn projects (via the perforant
path) to dentate gyrus, and also to CA3 and CA1. The model
also contains the trisynaptic loop, in which dentate gyrus
projects to CA3, which in turn projects to CA1, and then
back to entorhinal cortex. CA1 makes value-dependent synapses
on a motor cortical area, whose activity is used as a basis
for choosing motor actions. Darwin XI operates in a + maze
environment, performing a task similar to the one in [4],
which includes both learning of the reward structure
of the environment and reversal of behavior when the
reward structure changes.

This talk will focus on the multisensory aspect of the
model, extending both the number of sensory modalities,
and investigating the changes in both behavior and neuronal
activity resulting from lesioning various sensory pathways.
It will also present data on the emergence of neuronal
activity in the model that is correlated with behavioral
context, i.e. retrospective coding.

References

[1] D.G. Amaral, N. Ishizuka, and B. Claiborne. Neurons,
numbers, and the hippocampal network.
Progress in Brain Research, 83:1-11, 1990.

[2] C. Bernard and H.V. Wheal. Model of local connectivity
patterns in CA3 and CA1 areas of the hippocampus.
Hippocampus, 4:497-529, 1994.

[3] G.M. Edelman, G.N. Reeke Jr., W.E. Gall, G. Tononi,
D. Williams, and O. Sporns. Synthetic Neural Modeling Applied
to a Real-World Artifact.
PNAS, 89(15):7267-7271, 1992.

[4] J. Ferbinteanu and M. L. Shapiro. Prospective and
retrospective memory coding in the hippocampus.
Neuron, 40(6):1227-1239, 2003.

[5] D. Griffiths, A. Dickenson, and N. Clayton. Episodic
memory: what can animals remember about their past?
Trends in Cognitive Science, 3(2):74-80, 1999.

[6] J.L. Krichmar and G.M. Edelman. Brain-based devices
for the study of nervous systems and the development
of intelligent machines.
Artificial Life, 11(1-2):63-78, January 2005.

[7] J.L. Krichmar, D.A. Nitz, J.A. Gally, and G.M. Edelman.
Characterizing functional hippocampal pathways in a
brain-based device as it solves a spatial memory task.
PNAS, 102(6):2111-2116, 2005.

[8] J.L. Krichmar, A.K. Seth, D.A. Nitz, J.G. Fleischer,
and G.M. Edelman. Spatial navigation and causal analysis
in a brain-based device modeling cortical-hippocampal
interactions.
Neuroinformatics, 3(3):197-221, 2005.

[9] P. Lavenex and D.G. Amaral. Hippocampal-neocortical
interaction: A hierarchy of associativity.
Hippocampus, 10(4):420-430, 2000.

[10] J. O'Keefe and D. H. Conway. Hippocampal place units
in the freely moving rat: Why they fire where they fire.
Experimental Brain Research, 31:573-590, 1978.

[11] J. O'Keefe and J. Dostrovsky. The hippocampus as a
spatial map: Preliminary evidence from unit activity
in the freely-moving rat.
Brain Research, 34(1):171-175, 1971.

[12] G.J. Quirk, R.U. Muller, and J.L. Kubie. The firing
of hippocampal place cells in the dark depends on the
rat's recent experience.
J. Neurosci., 10(6):2008-2017, 1990.

[13] W. B. Scoville and B. Milner. Loss of recent memory
after bilateral hippocampal lesions.
J. Neurol. Neurosurg. Psychiatry, 20(1):11-21, 1957.

[14] A. Treves and E.T. Rolls. Computational analysis of
the role of the hippocampus in memory.
Hippocampus, 4:374-391, 1994.

[15] M.P. Witter, P.A. Naber, T. van Haeften, W.C. Machielsen,
S.A. Rombouts, F. Barkhof, P. Scheltens, and F.H. Lopes da Silva.
Cortico-hippocampal communication by way of parallel
parahippocampal-subicular pathways.
Hippocampus, 10(4):398-410, 2000.

[16] M.P. Witter, F.G. Wouterlood, P.A. Naber, and
T. Van Haeften. Anatomical Organization of the
Parahippocampal-Hippocampal Network.
Ann NY Acad Sci, 911(1):1-24, 2000.
--------------------------------------------------------------
http://vesicle.nsi.edu/users/fleischer/
http://www.nsi.edu/nomad/
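
Just to make the wiring concrete, here's a toy rate-model sketch,
in Python, of the cortical-hippocampal loop the Darwin XI abstract
describes. Fair warning: everything below (the area sizes, the random
sparse weights, the squashing function) is my own invented cartoon for
illustration only -- the actual NSI simulation uses large-scale
neuronal units, a value system, and a physical robot, and looks
nothing like thirty lines of numpy:

--------------------------------------------------------------
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical area sizes -- not the real Darwin XI numbers.
N = dict(EC=200, DG=400, CA3=300, CA1=300, MOTOR=50)

def project(n_src, n_dst, density=0.1):
    """Random sparse weight matrix standing in for an anatomical projection."""
    mask = rng.random((n_dst, n_src)) < density
    return mask * rng.normal(0.0, 1.0, (n_dst, n_src))

W = {
    # Perforant path: entorhinal cortex (EC) projects to DG, CA3, and CA1.
    ('EC', 'DG'):   project(N['EC'], N['DG']),
    ('EC', 'CA3'):  project(N['EC'], N['CA3']),
    ('EC', 'CA1'):  project(N['EC'], N['CA1']),
    # Trisynaptic loop: DG -> CA3 -> CA1 -> back out to EC.
    ('DG', 'CA3'):  project(N['DG'], N['CA3']),
    ('CA3', 'CA1'): project(N['CA3'], N['CA1']),
    ('CA1', 'EC'):  project(N['CA1'], N['EC']),
    # CA1 drives a motor area; in a real BBD these synapses are
    # value-modulated, strengthened when a salience signal fires.
    ('CA1', 'MOTOR'): project(N['CA1'], N['MOTOR']),
}

act = {name: np.zeros(n) for name, n in N.items()}

def f(x):
    # Rectified squashing nonlinearity for firing rates.
    return np.tanh(np.clip(x, 0.0, None))

def step(multisensory):
    """One update: fused 'cortical' input converges on EC, loops through
    the hippocampus, and biases a crude motor-action readout."""
    act['EC']    = f(multisensory + W[('CA1', 'EC')] @ act['CA1'])
    act['DG']    = f(W[('EC', 'DG')] @ act['EC'])
    act['CA3']   = f(W[('DG', 'CA3')] @ act['DG'] + W[('EC', 'CA3')] @ act['EC'])
    act['CA1']   = f(W[('CA3', 'CA1')] @ act['CA3'] + W[('EC', 'CA1')] @ act['EC'])
    act['MOTOR'] = f(W[('CA1', 'MOTOR')] @ act['CA1'])
    return int(np.argmax(act['MOTOR']))

# e.g. one fused input vector from vision, head direction, whiskers,
# and laser rangefinder (here just noise of the right shape):
action = step(rng.normal(size=N['EC']))
--------------------------------------------------------------

The point of the cartoon is just the topology: multisensory
convergence on EC, looping through DG/CA3/CA1 over different
timescales, and divergence back out -- which is exactly the anatomy
the abstract argues supports episodic memory.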

Dale Carrico said...

Would it be fair to say that you're operating on the rule-of-thumb not to consider this until either a) broad populations are concerned with it or b) a majority or major group of relevant concerned scientists indicates that it should be brought to public attention?

I would take the prospects of entitative post-biological superintelligence more seriously under scenario (b), or at any rate I would likely be moved to re-examine my thinking on these questions.

(Although I must warn you that a bunch of self-appointed "AI-experts" in Robot Cult Reynolds Wrap Priestly robes isn't likely to pass muster as "relevant concerned scientists" on my construal so don't even try that shit! :)).

But I think the broad implication of your formulation here is wrong.

There are plenty of things I take seriously that neither majorities in general nor scientific consensus have found their way to as yet -- my whole bit about the consensualization of non-normative modification healthcare may be programmatically mainstreamable (this is one of its merits in my view), but it isn't exactly mainstream here and now. Hell, I get geek misty at the prospects of the space elevator -- and I daresay that doesn't satisfy your (a) or (b).

I honestly think Singularitarian discourse is conceptually incoherent and profoundly limited, I think the sub(cult)ures shaped by the discourse are often distressingly irrational (in more ways than one), and I think the frames arising out of the discourse have pernicious effects as they are disseminated into more popular discussions (in more ways than one).

My thinking about the Singularitarians wasn't shaped by the Rule of Thumb you mention. Sometimes considerations relevant to that Rule will find their way into the rhetoric I use to dissuade others from getting sucked into Singularitarian silliness, however.

Anonymous said...

"(Although I must warn you that a bunch of self-appointed "AI-experts" in Robot Cult Reynolds Wrap Priestly robes isn't likely to pass muster as "relevant concerned scientists" on my construal so don't even try that shit! :))."

I would say that researchers at top-20 university computer science departments and the best corporate labs would be relevant. But I think that the narrower class of self-identified 'AI experts' within elite computer science, e.g. the Stanford and MIT AI labs, offers important and distinct data points. No need to warn me about taking a biased sub-sample of people wearing aluminum foil, although I appreciate the smileys.


"I honestly think Singularitarian discourse is conceptually incoherent"
I haven't seen any justification for this yet, and conceptual incoherence is a very strong claim. Perhaps this stems from the view that there is no good Singularitarian definition of 'superintelligence' given the interactions between networked humans, their tools (e.g. Google), and their institutions? Some time could be spent quibbling, but the potential impact of digital copying of minds or of much easier modification of those minds would remain vast, conditional on their feasibility.

"My thinking about the Singularitarians wasn't shaped by the Rule of Thumb you mention. Sometimes considerations relevant to that Rule will find their way into the rhetoric I use to dissuade others from getting sucked into Singularitarian silliness, however."
Thanks.

James wrote:
"Both Dale and I **started out** in sympathetic engagement
(more than just "sympathetic", in my case) with the discourse
you now promulgate. We stumbled upon reasons to reconsider
our uncritical acceptance of that discourse."

Could you describe those reasons?


"Have you ever read Herman Kahn's _The Year 2000: A Framework for
Speculation on the Next Thirty-Three Years_ (1968)."
I have not, although I have read Dreyfus, and am familiar with the RAND work.

Fusion power has been a few decades away for 5 or 6 decades now, but there have nevertheless been substantial improvements in the efficiency of fusion reactors over that time. We have made advances and developed tools that put us closer to fusion power than we were then, even though estimates of the distance to be covered were mistaken.

More computing power, the development of new mathematics, new programming techniques, increased knowledge of brain function, the education of more computer scientists worldwide, and the economic pressures from a vibrant technology industry (and a nascent robotics industry) at least partially offset the experience of failure (although we should infer the probable existence of unknown future roadblocks in AI development).

"If you've got seven figures' worth of cash (I'd guess), they might be willing
to write a report for you."
The IPCC method is a type of structured polling and interviewing, and would be much less expensive.
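To make the contrast with a seven-figure RAND report concrete: once the structured interviews are done, the aggregation step in an elicitation of this kind is nearly trivial. A sketch in Python, with invented numbers standing in for the elicited probabilities:

    import statistics

    # Hypothetical elicited values of P(advanced AI within 50 years)
    # from ten experts -- invented numbers, for illustration only.
    elicited = [0.01, 0.02, 0.05, 0.05, 0.10, 0.10, 0.15, 0.30, 0.60, 0.90]

    median = statistics.median(elicited)       # robust central estimate
    q1, q2, q3 = statistics.quantiles(elicited, n=4)

    print(f"median {median:.2f}, interquartile range {q1:.2f}-{q3:.2f}")

The expensive part is the structured interviewing itself (framing the questions, probing for reasons, avoiding anchoring), not the arithmetic; and a wide interquartile range is itself a finding, since it documents the absence of expert consensus.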

Anonymous said...

James wrote:
"Both Dale and I **started out** in sympathetic engagement
(more than just "sympathetic", in my case) with the discourse
you now promulgate. We stumbled upon reasons to reconsider
our uncritical acceptance of that discourse."

Could you describe those reasons?
The chronology, psychology, etc.

jimf said...

"Utilitarian" wrote:

> [I wrote:]
>
> > We stumbled upon reasons to reconsider
> > our uncritical acceptance of that discourse.
>
> Could you describe those reasons?
> The chronology, psychology, etc.

As I mentioned a few days ago in the comment thread on
http://amormundi.blogspot.com/2007/09/still-more-on-superlativity.html :

FWIW, here's a link to an Orkut community that I created
several years ago to summarize my own disillusionment with the
whole Singularity mishegas. Nobody ever posted there but
me (talk about vanity publishing! ;-> ), but it remains useful
for just this purpose (i.e., giving people access to my views,
however unpopular), and at the same time Orkut shields it
from being generally Googlable, which probably cuts down on
the hate-mail I'd otherwise get.

I think you have to join Orkut to access it, but I believe
anybody can do that now, without a special invitation.

"Unbound Singularity"
http://www.orkut.com/Community.aspx?cmm=38810

The chronology and (some, at least) of the psychology are in
"Long-winded justification for this community"
http://www.orkut.com/CommMsgs.aspx?cmm=38810&tid=2

(If you want more, contact me privately:
jfehlnger AT comcast DOT net).

Anonymous said...

I reviewed the posts on Orkut (there was one post from Jacques, 'good work' or something to that effect). There was a lot more C.S. Lewis and J.R.R. Tolkien than I expected. :)

When you say that millions of people will be involved in creating a Singularity, and that no group or individual could control the development of artificial intelligence, I don't think you present sufficient evidence. Do we really know enough to say that, as scientific knowledge, software, and hardware improve, individuals like Bill Gates, Brin and Page, or Jim Simons will never have the potential to jump ahead with private or corporate projects? That no Manhattan Project-style endeavour (the original had 15 Nobel winners, many other scientists, and vast funding), governmental or not, will be launched successfully in any country in the world?

jimf said...

"Utilitarian" wrote:

> There was a lot more C.S. Lewis and J.R.R. Tolkien than I expected.

Tolkien is a personal fetish. Though the subtext in his mythology
about the snare of the promise of immortality has an interesting
resonance in the light of >Hist discourse.

Lewis, OTOH, is amusing to read after having had a brush with
the >Hists. I rather disliked _That Hideous Strength_ when I first
read it many years ago, but now I find it hilarious.

---------------------------
"The N.I.C.E. marks the beginning of a new era -- the **really**
scientific era. Up to now, everything has been haphazard.
This is going to put science itself on a scientific basis.
There are to be forty interlocking committees sitting every
day and they've got a wonderful gadget -- I was shown the model
last time I was in town -- by which the findings of each committee
print themselves off in their own little compartment on the
Analytical Notice-Board every half hour. Then, that report
slides itself into the right position where it's connected up
by little arrows with all the relevant parts of the other reports.
A glance at the Board shows you the policy of the whole Institute
actually taking shape under your own eyes. There'll be a
staff of at least twenty experts at the top of the building
working this Notice-Board in a room rather like the Tube control
rooms. It's a marvellous gadget. The different kinds of
business all come out in the Board in different coloured
lights. It must have cost half a million. They call it a
Pragmatometer."

"And there," said Busby, "you see again what the Institute
is already doing for the country. Pragmatometry is going
to be a big thing. Hundreds of people are going in for it.
Why this Analytical Notice-Board will probably be out of
date before the building is finished!"
---------------------------

> When you say that millions of people will be involved in creating
> a Singularity, and that no group or individual could control the
> development of artificial intelligence. . .

I said that no one group will be in charge of creating or
steering a **singularity** (if such a thing takes place).
It will be a multi-axial event. Have you ever seen the
"Connections" public TV show narrated by James Burke?
Technology is like that -- complexly interacting, inter-implicated,
non-linear.

Computers have turned out to be vastly important, but
hardly in the way folks imagined back in the 60's when
huge, enormously expensive mainframes were the norm. What did
the SF writers predict? HAL and his brethren. Huge,
enormously expensive **talking** mainframes.
What do we have? People carrying around computers without
thinking of them as "computers" -- the cell phone, the iPod,
the digital camera. At home, the HDTV, the DVD player, the CD player.
Plus one or two or a dozen boxes explicitly thought of
as "computers". Still dumb, but indispensable.
How many microprocessors lurking in this,
that, or the other gadget? God knows.

Yes, I know that SIAI promotes the notion of the Big Brother
Superintelligence that will protect the world from harm,
and that this notion has been latched onto by the
Existential Risk worriers[*], but others, even others in
the >Hist community, dismiss that notion with scorn.
Perry Metzger, one of the founders of the Extropians' mailing
list, has said that post-biological intelligence will
lead to a "Cambrian explosion" of diversity, and this
seems more likely to me than the monolithic fantasies
of the "singularitarians", so-called.

Eugen Leitl frequently dismisses the assumption that AI, if it happens,
will be monolithic. There won't be just one, there'll
be thousands of them. Millions. Kurzweil assumes the same
thing, AFAIR.

[*] When **did** these people gain such ascendancy, anyway?
And why, exactly? Was it Bill Joy's angst back in (early)
2000 (was it?) that was the trigger? It happened before 9/11, certainly.
And I **think** even before the publication of Hugo de Garis'
_The Artilect War_. And which came first, Grey Goo or Malevolent
AI? Drexler himself came up with the notion of the former,
didn't he? (I have to admit I've paid much less attention to the
nanotech stuff, and the cryonics/immortalist stuff for that matter,
than I have to AI).

> Do we really know enough to say that as scientific knowledge, software,
> and hardware improve individuals like Bill Gates, Brin and Page, or
> Jim Simons will never have the potential to jump ahead with private or
> corporate projects?

I really think you "unbellyfeel" (to use Orwellian idiom) the fact
that there is no clear path between today's technology and AI.
If it happens, it will take discontinuities -- paradigm shifts --
in technology that would be as startling to us as a Pentium would
be to Benjamin Franklin. The world will be an incalculably
different place then (even if "then" is just 100 years from now) --
with 3D optical nano-scale self-reconfiguring
signal-processing devices, or what have you. However they're
built or wound up or **grown** up, they won't be programmed
in C or Visual Basic. They probably won't bear much resemblance
to digital computers as we know and love them today (though
digital computers, in an even more refined form than we now
have them, will probably be even more ubiquitous then [and
fanatical audiophiles will probably still be listening to
vacuum-tube stereos ;->]).

We can hardly even begin to guess what those technologies will
be like, any more than the scientists and engineers of the Manhattan
Project, in those days of vacuum-tube electronics, could have
begun to guess what it would take to manufacture something
like a Pentium chip.

And as for the sheer order-of-magnitude scales of physical
complexity separating a biological brain (**any** biological brain)
from the most sophisticated modern-day computer -- I'd suggest
you read something like Edelman's _Bright Air, Brilliant Fire_
to get an inkling of that.

jimf said...

> What did the SF writers predict? HAL and his brethren. Huge,
> enormously expensive **talking** mainframes.
> What do we have? People carrying around computers without
> thinking of them as "computers"

Although, you know, the original _Star Trek_ had an inkling
of this without knowing exactly why or what it was doing
(apart from satisfying the exigencies of plots and scripts).

The ship's computer, or "library computer" (usually accessed
via Mr. Spock's station on the bridge) was the usual
talking mainframe. But the ubiquitous "tricorder" was
really a portable computer that wasn't called a computer.

In the original series' Writers/Directors Guide (1967)
http://www.chekovsite.com/fanfiction/writersguide.html --
written by Roddenberry himself I believe --
it is in fact even called a "computer":

p. 19:

IMPORTANT EQUIPMENT AND TERMINOLOGY

TRICORDER
A portable sensor-computer-recorder, about the size of a
large rectangular handbag, carried by an over-shoulder
strap. A remarkable miniaturized device, it can be used
to analyze and keep records of almost any type of data
on planet surfaces, plus sensing or identifying various
objects. It can also give the age of an artifact, the
composition of alien life and so on. The tricorder can
be carried by Uhura (as Communications Officer she often
maintains records of what is going on), by the female
yeoman[*] in a story, or by Mr. Spock, of course, as a
portable scientific tool. It can also be identified
as a "medical tricorder" and carried by Dr. McCoy.

http://www.racprops.com/issue5/classictricorder/

[*] Oh those female yeomen! Miniskirts and Big Hair In Space.