"These days," writes Dvorsky, "people worry about robots stealing our jobs. But maybe we should be more concerned about massive populations of computerized human brains." Yes, declares the futurist, maybe we should worry less about real things -- like the displacement of jobs in the midst of an unemployment crisis caused by outsourcing and automation in the absence of collective bargaining -- and more about unreal things like digital avatar armies attacking us from cyberspace. Thank heavens we have futurologists to keep our eyes on the ball!
"Called 'ems,'" Dvorsky declares -- that's what the "experts" in the non-field focused on these non-things call "them" is it, George, "ems"? -- "these infinitely-reproducible brains could change the world." Well, duh. That's what bleeding edge futurology always does -- It. Changes. Everything. Everybody knows that. That's why Segways "changed the way we think of cities." That's why encryption shattered the nation-state. That's why buckytubes gave us desktop nano-anything machines.
"To learn more about this prospect, I spoke to economist and futurist Robin Hanson." Dvorsky does not add, as he usually does not when he plays this little game, that Robin Hanson is a long-time contributor to the list-serves and conferences and the rest of the sub(cult)ural life of the transhumanoid sects of the Robot Cult that George Dvorsky is also a member of and uses io9 to proselytize for. Neither does Dvorsky mention that "Oxford University's Future of Humanity Institute" for which Robin Hanson is a "Research Associate" was founded by Nick Bostrom who was also one of the founders of the World Transhumanist Association and then the stealth transhumanoid outfit the Institute for Ethics and Emerging Technologies and that, the suave respectability of the Oxford moniker aside, the Future of Humanity Institute is thronged with Robot Cultists, transhumanism's first web celebrity Anders Sandberg, serially wrong nano-cornucopiast Eric Drexler, expert in extra-terrestrial biology (it helps that there isn't any we know of yet) and existential-risk management (why we should worry about robocalypse and nano-goo more than real problems) Milan Cirkovic and other transhumanoid eminences grises. (I ask yet again, io9, were Dvorsky a Scientologist flogging the so-called "independent credentials" of fellow Scientologists would you think that is okay without a disclosure of the real relationship involved?) Anyhow, Hanson is writing a book about "whole brain emulations -- or what he simply refers to as 'ems.'" ...Oh, there we have it. One of his fellow-faithful transhumanoids is calling these non-things "ems" in the futurological hairball he is coughing up for Dvorsky to promote in his io9 column. Very nice.
In any case, Dvorsky, channeling Hanson, helpfully explains that "A brain emulation can be thought of as a type of brain upload." That is to say, "ems" are non-things that are a subset of other non-things futurologists talk about instead of talking about real things that matter. More to the point, "uploads" are a preoccupation of some techno-immortalist sects of the Robot Cult who have made the mistake of pretending that a picture of them would be the same thing as them if the picture were a "sufficiently detailed scan," which it obviously would not be (terminological hanky-panky over that weaselly "sufficiently" notwithstanding). Why such a scan would not only be them but be an immortal version of them when no picture ever has been -- not to mention that no computer ever has been and no software ever has been -- is anybody's guess, but my own guess, if I had to guess, would be that it has a lot to do with futurologists who are really, seriously scared of dying and who prefer pseudo-scientific reassurances on that score to the more conventionally religious versions already on offer. Hanson's "ems" don't seem to promise to upload people into eternal cyberangel avatars in Holodeck Heaven, however, but only to create super slavebots and sexy sexbots when they get stuffed into robot bodies of The Future.
Hanson admits his "em" idea comes on the heels of decade after decade after decade of cocksure pronouncements by "researchers" like him that they were on the verge of creating artificial intelligence even though they were always completely wrong about that and, indeed, after all this time look like they haven't progressed much toward this goal since they started out. In a surprise move that is a surprise to no one who grasps the essential identity of futurology with con-artistry, Hanson has decided to take the lemon of an AI-discourse characterized equally by ignorance, megalomania, and failure and make some lemonade. Maybe the intoxication of so many sociopathically logo-assertive techbros among the foremost AI-cheerleaders with disembodied, a-historical, computational fantasies of intelligence had something to do with the problem as well, but don't mind me, I'm no "expert." Sure, maybe AI has always failed because nobody really ever understood the phenomenon of "intelligence" the AI engineers were trying to mechanically reproduce, but why take a pause to better understand what you have so long ignored to the ruin of your project, why not just take it in stride? Who needs to understand stuff? Just take a really good picture, let the black box stay black -- try not to contemplate that the force of the "whole" in that evocative phrase "whole brain" actually requires understanding of the whole in question -- and plug that puppy into a big-boobied mannequin or Mars rover or whatever, and, hell, we're off to the races! Or at any rate, we're grinding out more pop-tech pulp for the credulous futurologist's shelf (or, more likely, another hour's eye-strain on some techbro's kindle).
Quoting Hanson:
"If the scan and cell models are good enough [the very question for those with questions you know, dispensed with at the outset --d], the whole model must [must! presto! problems solved! --d] have the same input-output behavior as the original brain... So if you add artificial eyes, ears, hands, and so on [don't think too much about whether we have any of these add-ons, and you really do have to love that futurological "and so on" --d], it could talk with you and do tasks as well as the original. It could also do as well at arguing that it's conscious and deserves moral consideration [and so those whom we presently regard as conscious or worthy of moral consideration have merely fooled us by arguing well? watch yourselves, people, around this Robin Hanson fellow! --d]... Ems would remake the world [no self-respecting futurologist can refrain from at least one declaration that total earth-shattering history-ending transformation is implied by their stunning insights --d]... We humans are made [by whom? of course, if we are already "made" then the making of our like has already been rhetorically opened for business --d] of meat, our brains run [is that what brains do, "run" --d] at the same speed, we take decades to build, and we must be trained [training is building, then, is it? --d] individually... Because ems are easily copied [because that wildly implausible ease was stipulated at the outset in order to have a reason to read the article at all --d], you could train one to be a good lawyer and then make a billion copies who are all good lawyers... That one initial em could come from the very best suited human [note the assumptions embedded in this formulation: the satisfaction of abstract and hence (supposedly) copyable criteria yields the "best lawyer" as if such considerations of worth aren't really usually the result of a host of contingencies of circumstance, appearance, interpersonal chemistry when it is indeed "humans" making these decisions in the scrum of human events --d]; the typical em would be as sharp and capable as the very best humans [note that the "sharpness" and "capability" of tools denotes "best"-ness in humans once the instrumentalized circumscription of imagination required by the whole thought-experiment is made, a result with real effects in the world, even if the appearance in the world of "ems" isn't among those real effects, even if the treatment of humans as something more like "ems" is the only tendency --d] in the world. The em economy would thus be much more competitive because small efficiency gains would lead to a bigger displacements [about the market fundamentalist faith expressed in this logic I will say a tad more at the very end --d] of behavior."Heavens, what a lot of ifs! Now, complete failure and ignorance may seem a shaky foundation on which to erect so many confident assertions, but like last season's tragic AI-fashionista futurists handwaving about Big Data that would "scale" into the AI of artificial intelligence without anybody needing to understand the "ai" of actual intelligence Hanson's "ems" as snapshot black boxes that would "plug" into the AI of artificial intelligence without anybody needing to understand the "ai" of actual intelligence re-enacts the same desperate dead-ender gambit. Hey, techno-transcendent faith-based initiatives are a hell of a drug.
Just because it illustrates another point I often make about sub(cult)ural futurism, it is worth noting that Hanson isn't just an AI dead-ender but a libertopian dead-ender, too. "[A] more competitive em economy will select more strongly against jurisdictions whose regulations create competitive disadvantages... For example, if most human-dominated jurisdictions are slow and cautious regarding the first ems, the em economy would blossom in the few places that allow quick adoption of em-friendly practices. Such places would soon dominate the world." It's been a while since I have seen somebody propose what James Boyle used to deride as the libertarian gotcha so baldly -- the argument that if a thing would be profitable were it to exist, then it must not only be possible but it must sweep the whole world irresistibly, all you pathetic nanny-state luddite scum to the contrary notwithstanding! Obviously this is true, and it is the reason we all took a maglev-ramp jet this evening to our vacation homes in some L5 torus where we feasted on low-calorie super-nutritious cruelty-free cell-culture steaks wearing diamonds made dirt-cheap from dirt in our drextech desktops while gazing onto a planet without nation-states because of crypto-anarchists and without pollution because of highly profitable mega-industrial geo-engineering projects built by beneficent petro-chemical CEOs. It's all so obvious! Seriously, though, Hanson is famous for proposing "idea futures" markets that would presumably provide a greater financial stake in accuracy conducive to prediction than in gaming the system and profitably declaring the results accurate, and also for proposing a rather biased conception of what it would mean to overcome bias and what such an overcoming would be good for. Neither proposal seems to me particularly attuned to human personality or history, but the reductionisms implied in them, I imagine, must be especially compelling to those market libertarian types who already like to indulge fantasies that there are no rational conflicts among people, that all contracts are noncoercive by fiat whatever conditions of misinformation or duress articulate their terms, and that market orders arise spontaneously from natural forces of supply and demand wherever states do not hinder them, rather than being contingent parochialisms utterly dependent for their formation and maintenance on laws, treaties, norms, and infrastructural affordances usually dictated by incumbent elites to the detriment of majorities. Well, maybe you have to be an "em" to really get it.
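A postscript for the mechanically curious: the "idea futures" proposal is at least concrete, since the market maker Hanson proposed for such markets is his logarithmic market scoring rule, in which a cost function sets prices that double as the market's implied probabilities. Here is a minimal sketch of it (my own toy Python rendering of the published formula, assuming a simple two-outcome market and a made-up liquidity parameter; nothing in it settles whether such markets reward accuracy more than they reward gaming):

```python
import math

# A minimal sketch (mine) of Hanson's logarithmic market scoring rule, the
# market-maker mechanism behind his "idea futures" proposal. b is the
# liquidity parameter (a made-up value below); q[i] is the number of
# shares outstanding on outcome i.

def cost(q, b):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def price(q, b, i):
    """Instantaneous price of outcome i -- the market's implied probability."""
    total = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / total

def buy(q, b, i, shares):
    """Buy `shares` of outcome i; returns what the trader pays the maker."""
    before = cost(q, b)
    q[i] += shares
    return cost(q, b) - before

# A two-outcome market ("claim true" / "claim false") with b = 100.
q, b = [0.0, 0.0], 100.0
print(f"initial price of 'true': {price(q, b, 0):.3f}")   # 0.500
paid = buy(q, b, 0, 50.0)                                  # one bullish trade
print(f"trader paid {paid:.2f}; new price of 'true': {price(q, b, 0):.3f}")
```

The design choice doing all the work is that liquidity parameter b: the market maker's worst-case loss is bounded by b times the log of the number of outcomes, which is also to say somebody has to subsidize the market for the "financial stake in accuracy" to exist at all.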
11 comments:
> Sure, there is lots of loose marketing and promotional and
> advertising talk of computers being brains. . .
Which goes right back to the beginning of commercial digital
computers in the 50's.
Univac - the "giant electronic brain":
http://www.youtube.com/watch?v=O7-j3oVMaOA
Presumably, the ad copywriters were just expanding on the
tropes that the journalists came up with in the 40's:
http://www.seas.gwu.edu/~mfeldman/csci110/summer07/eniac2.pdf
------------------
So on Saturday, February 14, 1946, the press was invited to
the Moore School of Engineering in Philadelphia for the public
unveiling of ENIAC. . . Based upon that demonstration and
other photos taken inside of ENIAC, the press corps
developed their stories that appeared in Sunday
newspapers the following day. The event was hailed in newspapers
all over the United States and Europe, and it provided the public
with its first view of large-scale, high-speed computers.
Rather than showing the picture of the eight men in the group
photo, the newspapers published pictures showing a huge
room with wires, switches, and lights. In this room humans
were seen walking around inside and looking very small
and fragile by comparison. In these early pictures, the
humans, who were entering the data and examining the results,
appeared to be serving the demands of the machine rather
than vice versa, much like the images seen previously in
science fiction classics, such as the 1927 Fritz Lang film,
_Metropolis_.
In bold headlines seen around the world, metaphorical images
such as electronic brain, magic brain, wonder brain, wizard,
and man-made robot were used to describe the new calculating
machine to an awestruck public. Examples of these headlines. . .
demonstrate how newspapers tried to outdo each other in making
flamboyant claims about ENIAC. Several months later a picture
of ENIAC was actually shown in the June, 1946 issue of
_Mechanix Illustrated_ superimposed over the picture of a
human brain!
After that initial press conference, occasional attempts were made
by the press to correct misconceptions about the new computing
devices. For example, in April of 1946 it was stated in
the _Washington News_ that "Electronic Super-Brain Has One
Limitation. . . these electronic 'super-brains' are, of course,
unable to do any actual thinking. . ." For the most part,
however, anthropomorphic references in headlines continued to shape
the public perception of computers for years to come. ENIAC was
referred to as a child, a mathematical Frankenstein, a mechanical
Einstein, a whiz kid, a predictor and controller of weather, and
a wizard. Even headlines characterizing ENIAC as a calculator or
computer used metaphorical language that raised public expectation
and even fear of the new machines. . .
[T]he _London Times_ published an article on ENIAC on November 1, 1946
headlined, "An Electronic Brain: Solving Abstruse Problems;
Valves with a Memory." [Renowned British physicist and mathematician]
Dr. [D. R.] Hartree immediately wrote a letter to the editor
criticizing the headlines. His response was printed the next week
under the banner, "The 'Electronic Brain': A Misleading Term;
No Substitute for Thought."
Unfortunately, his objections fell on deaf ears and the members of
the British press corps, like their American counterparts, continued
to use anthropomorphic and awesome characterizations for the computers
subsequently announced in Britain. . .
In spite of efforts to clear up misconceptions about the new
computers, the press continued to present exaggerated metaphorical images
of computers up into the early 1960s. . .
[E]arly public attitudes toward computers were shaped by the press.
Like many other examples of scientific discovery during the last 50 years,
the press consistently used exciting imagery and metaphors to describe
early computers. The science journalists covered the development
of computers as a series of dramatic events rather than as an
incremental process of research and testing. Readers were given
hyperbole designed to raise their expectations about the use of the
new electronic brains to solve many different kinds of problems.
This engendered premature enthusiasm, which then led to disillusionment
and even distrust of computers on the part of the public when the
new technology did not live up to expectations.
As late as four decades after the announcement, researchers examining
the public perception of computers continued to find vestiges of a
phenomenon they characterized as an "awesome machine" view of
computers. Surveys of public attitudes about computers conducted
in 1963. . ., in 1971. . ., in 1981. . ., and in 1991. . . all
revealed that a significant number of people still thought of
computers as "awesome thinking machines." They would respond
affirmatively to such statements about computers as a) they can
think like a human being thinks, b) they sort of make you feel
that machines can be smarter than people, c) there is no limit to
what these machines can do, d) electronic brain machines are kind
of strange and frightening, and e) they are so amazing that they
stagger your imagination. These are exactly the images of
computers that the press had consistently presented to the public
for the previous 20 years. Further, the computer attitude research
conducted over the past 30 years suggests that the perception
of computers as awesome thinking machines may have in fact retarded
public acceptance of computers in the work environment, at the
same time that it raised unrealistic expectations for easy
solutions to difficult social problems. . .
====
Of course, the SF authors and movie makers made the most of
the ready-to-hand gee-whizzery.
(_Desk Set_, 1957
http://www.youtube.com/watch?v=ZK3zmPUxblk ).
Gotta love those blinkenlights, though! ;->
http://www.angelfire.com/scifi/B205/b205.gif
> Oh, there we have it. One of his fellow-faithful transhumanoids is
> calling these non-things "ems" in the futurological hairball he is
> coughing up for Dvorsky to promote in his io9 column.
Auntie Em! Auntie Em! I'll give you Auntie Em!
;->
The two films I teach in my course on the politics of digital networked formations are The Forbin Project and... Desk Set! They are both so good and so thoroughly bonkers.
> The two films I teach in my course on the politics of digital
> networked formations are The Forbin Project and... Desk Set!
> They are both so good and so thoroughly bonkers.
"We've made tremendous strides in this field. Visual readoffs
are all centralized, miniaturized, and set on schematic panels
now. And then the data compiled is all automatically computed.
And there's an automatic typewritten panel on it, you see, so
there's no need. . ."
"Now, now, now, please wait a minute! I don't understand one word
you're saying. But it sounds great! If you say it can be done,
that's good enough for me."
> . . . _Desk Set_ . . .
Of course, let us not minimize or discount what **has** happened
in the previous almost 60 years.
The Internet (and Google, Wikipedia, etc.) have put **almost**
(not quite) the facilities of a research department like the
one in the movie at the fingertips of every Jane and Joe with a PC and a Web
connection (not exactly everyone on the planet, but still. . .).
Witness the paper I quoted from above, which I was able to
locate in a minute or two with a few keystrokes and mouse clicks.
And while real research departments haven't been replaced by these
amenities, no doubt they have been enhanced by them (as indeed
the ladies in the movie turned out to have been in the end).
It isn't exactly Transcendence(TM), but it's still cool. ;->
Hey, I'm as much a big geek as the next big fag. As I have said before, when it comes to sfnal fandoms and warrantable scientific R&D, let a bazillion flowers bloom! But flim-flam is flim-flam, pseudo-science is pseudo-science, reactionary politics is reactionary politics, and defensive marginal subcultural identity-movements pining for personal techno-transcendence and to sweep the world and end history quack too much like cults for anyone to fault us for calling them cultlike.
> Hanson admits his "em" idea comes on the heels of decade after
> decade after decade of cocksure pronouncements by "researchers"
> like him that they were on the verge of creating artificial
> intelligence even though they were always completely wrong about
> that and, indeed, after all this time look like they haven't
> progressed much toward this goal since they started out. In a
> surprise move that is a surprise to no one who grasps the essential
> identity of futurology with con-artistry, Hanson has decided
> to take the lemon of an AI-discourse characterized equally by
> ignorance, megalomania, and failure and make some lemonade.
Marvin Minsky, as far as I can gather, still dismisses this "bottom-up"
stuff (neural nets, connectionism, brain simulation, artificial evolution)
as magical thinking about AI (and has not changed his mind about this
since the late 50s).
Why I changed from bottom-up to top-down thinking:
http://www.webofstories.com/play/marvin.minsky/26
Our pals at the Machine Intelligence Research Institute
(the erstwhile Singularity Institute for Artificial Intelligence)
also take this line, with a twist -- that it would be
irresponsibly **dangerous** to create an AI without having
a comprehensive top-down theory of how it will behave ahead
of time (and of course, they also believe there's only one person
in the world "rational" enough to invent such a theory).
http://en.wikipedia.org/wiki/Hacker_koan
---------------------
In the days when Sussman was a novice, Minsky once came
to him as he sat hacking at the PDP-6.
"What are you doing?", asked Minsky.
"I am training a randomly wired neural net to play Tic-tac-toe",
Sussman replied.
"Why is the net wired randomly?", asked Minsky.
"I do not want it to have any preconceptions of how to play",
Sussman said.
Minsky then shut his eyes.
"Why do you close your eyes?" Sussman asked his teacher.
"So that the room will be empty."
At that moment, Sussman was enlightened.
====
So, the "bottom-up" con-artists want us to take them seriously because they pretend they can deliver dead-ender GOFAI without understanding intelligence as such, while the "top-down" con-artists want us to take them seriously because they they pretend to still want to understand intelligence to deliver dead-ender GOFAI even though they don't understand it any more than they ever did and show little sign of doing anything substantially different about that.
> The Internet (and Google, Wikipedia, etc.) have put **almost**
> (not quite) the facilities of a research department like the
> one in the movie at the fingertips of every Jane and Joe with a PC and a Web connection[...]
I don't think so. After all, research departments are made up of experts, learned people who've internalized the methodology and required knowledge base of researching something. To research means to analyse and to judge; you have to take apart the data and decide what's factually wrong or right. Laymen don't have these skills.
If I remember my school days back in the last eon, the "research" process boiled down to feeding the project's title into Wikipedia, clicking on the first search result, and printing out the particular article page to be read verbatim in front of the class. Bonus points for stumbling over the big words in the text...
Without the ability to comprehend information we end up parroting meaningless strings of data. Therefore education trumps everything (not that the techbro fanboys care about such boring old concepts that are nowhere to be found in their Heinlein or Vinge dime novels).
Researchers may indeed profit from networking and other digital tricks, but as Dale has remarked, all that Big Data may also be a hindrance, since it takes longer to process and understand.
> Well, maybe you have to be an "em" to really get it.
Em & Em:
http://kruel.co/wp-content/uploads/2012/10/team-basement-640x526.jpg