Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Monday, March 27, 2017

Defying Gravity

PoliticalWire: The latest Gallup daily tracking poll shows President Trump’s approval rating crashing to 36%, with 57% disapproving. [This is another plummet from last week’s number, which was already unprecedented enough to get a post!] "Trump’s current 36% is two percentage points below Barack Obama’s low point of 38%, recorded in 2011 and 2014. Trump has also edged below Bill Clinton’s all-time low of 37%, recorded in the summer of 1993, his first year in office, as well as Gerald Ford’s 37% low point in January and March 1975. John F. Kennedy’s lowest approval rating was 56%; Dwight Eisenhower’s was 48%."

12 comments:

jimf said...

> . . .President Trump’s approval rating crashing. . .

https://www.nytimes.com/2017/03/28/business/economy/trump-white-working-class-carnage.html
------------
‘Carnage’ Indeed, but Trump’s Policies Would Make It Worse
Eduardo Porter
ECONOMIC SCENE
MARCH 28, 2017

Donald J. Trump can be brilliant. On the campaign trail, his
diagnosis of the raw anger and disillusionment among white
working-class Americans bested the most sophisticated analyses
from the professional political class.

His description of “American carnage” in his
Inaugural Address — complete with “rusted-out factories
scattered like tombstones across the landscape,”
impoverished mothers and children, crime, drugs that
“robbed our country of so much unrealized potential” — struck
a nerve with millions of voters who feel left behind
by a country buffeted by demographic, technological and
social change. . .

[But Trump's] initial forays into social and economic
policy making raise an uncomfortably raw question: Was
his appeal to the troubled working class a con? [Ya think?! :-0]
If anything, his proposals look like a scheme to make the
carnage worse.

Last week, the Princeton economists Anne Case and Angus Deaton
unveiled new research offering a bleak portrait of Mr. Trump’s
base of white men and women without a bachelor’s degree:
They are, indeed, dying in droves, committing suicide and
poisoning themselves with drugs and alcohol at much higher
rates than blacks, Hispanics, or men and women in other
advanced countries.

“Deaths of despair,” Professors Case and Deaton call them.
From 1998 through 2015, the mortality rate of white non-Hispanic
men and women with no more than a high school diploma
increased in every five-year age group, from 25-to-29 to
60-to-64, they found.

The desperation took time to build — 40 or 50 years maybe,
as automation and globalization killed jobs on the factory
floor. Squeezed into insecure, low-wage jobs in the service sector,
many workers lacking the higher education required to profit
from the new economy simply left the job market. . .

These economic changes affected all workers with scant education,
of course. But whites suffered a deeper blow: In 1999 mortality
rates of whites with no college were around 30 percent lower
than those of blacks as a whole, Professors Deaton and Case
found. By 2015 they were 30 percent higher. It seems that
blacks and Latinos, whose memories of the halcyon days of
manufacturing in the early 1970s are colored by the stain
of discrimination, suffered less of a loss.

This is hardly a fitting picture for one of the most affluent
societies in human history. But there you go. . .
====


Oh Geeze! Stifle yourself, Edith!

jimf said...

> The desperation took time to build — 40 or 50 years maybe,
> as automation and globalization killed jobs on the factory
> floor.

So I visited Barnes & Noble today, looking for _American War_
by Omar El Akkad. They didn't have it (they never do, that soon
after I read a review), but I was AI'd coming and going.

First, I switched on my "elite" radio station in the car
(NPR station WNYC) and heard Leonard Lopate interviewing somebody
gushing about the promise and threat of neural networks:

http://www.wnyc.org/story/how-advancing-ai-means-understanding-our-machines-less/
-------------
Mar 29, 2017

Jeff Wise, a journalist who specializes in aviation, adventure and psychology,
joins us to discuss his latest article, “When Machines Go Rogue” for
The Outline. Wise explains why it’s becoming more difficult to understand
the choices our machines make as we integrate more advanced
artificial intelligence into systems like self-driving cars. . .
====

I didn't find the book I was looking for, but I passed a New Non-Fiction
display containing Yuval Harari's _Homo Deus_:
https://www.amazon.com/Homo-Deus-Brief-History-Tomorrow/dp/0062464310/

I also flipped through _The Knowledge Illusion: Why We Never Think Alone_
by Steven Sloman and Philip Fernbach
https://www.amazon.com/Knowledge-Illusion-Never-Think-Alone/dp/039918435X/
which contains the passage:

"The technological revolution has improved our lives in some ways, but it
has also given rise to worry, despair, and even dread. Technological
change is leading to all kinds of effects, and some may not be quite what
we bargained for.

Some of our greatest entrepreneurs and scientific minds see even darker
clouds on the horizon. People like Elon Musk, Stephen Hawking, and Bill Gates
have cautioned that technology could become so sophisticated that it
decides to pursue its own goals rather than the goals of the humans who
created it. The reason to worry has been articulated by Vernor Vinge in
a 1993 essay entitled "The Coming Technological Singularity," as well as
by Ray Kurzweil in his 2005 book _The Singularity Is Near: When Humans
Transcend Biology_, and most recently by Swedish philosopher Nick Bostrom,
who works at the University of Oxford. In Bostrom's language, the fear
is that technology is advancing so fast that the development of a superintelligence
is imminent.

A superintelligence is a machine or collection of machines whose mental
powers are far beyond that of human beings. . ."


Well golly, Batman!

jimf said...

Then I hit the magazine section, and was immediately
confronted with the cover of _Discover_ magazine:

"Artificial Intelligence: Can We Build Machines With Common Sense?"
http://discovermagazine.com/2017/april-2017

Then I glanced at _The New Yorker_, and saw:

http://www.newyorker.com/magazine/2017/04/03/ai-versus-md
-------------
"The Algorithm Will See You Now
When it comes to diagnosis, will A.I. replace the M.D.?"
By Siddhartha Mukherjee
====

and also, in the same issue:

http://www.newyorker.com/magazine/2017/04/03/silicon-valleys-quest-to-live-forever
-------------
"Silicon Valley’s Quest to Live Forever"
By Tad Friend
Can billions of dollars’ worth of high-tech research succeed in making death optional?
====

It's like the Extropians' mailing list has materialized at
the bookstore, but 15 years later!

However, _The Atlantic_ had a somewhat less upbeat article about the high techies
(oh, those feminists and SJWs. Always whining about something!):

https://www.theatlantic.com/magazine/archive/2017/04/why-is-silicon-valley-so-awful-to-women/517788/
-------------
Why Is Silicon Valley So Awful to Women?
Liza Mundy
April 2017

Tech companies are spending hundreds of millions of dollars to improve conditions
for female employees. Here’s why not much has changed—and what might actually work.

. . .

Because Silicon Valley is a place where a newcomer can unseat the
most established player, many people there believe -- despite evidence
everywhere to the contrary -- that tech is a meritocracy. Ironically
enough, this very belief can perpetuate inequality. A 2010 study,
“The Paradox of Meritocracy in Organizations,” found that in cultures
that espouse meritocracy, managers may in fact “show greater bias
in favor of men over equally performing women.” . . .

Such bias may be particularly rife in Silicon Valley because
of another of its foundational beliefs: that success in tech
depends almost entirely on innate genius. Nobody thinks that of
lawyers or accountants or even brain surgeons; while some people
clearly have more aptitude than others, it’s accepted that law school
is where you learn law and that preparing for and passing
the CPA exam is how you become a certified accountant.
Surgeons are trained, not born. In contrast, a 2015 study published
in Science confirmed that computer science and certain other fields,
including physics, math, and philosophy, fetishize “brilliance,”
cultivating the idea that potential is inborn. The report
concluded that these fields tend to be problematic for women,
owing to a stubborn assumption that genius is a male trait. . .

“The more a field valued giftedness, the fewer the female PhDs,”
the study found, pointing out that the same pattern held for
African Americans. Because both groups still tend to be
“stereotyped as lacking innate intellectual talent,” the study
concluded, “the extent to which practitioners of a discipline
believe that success depends on sheer brilliance is a strong
predictor of women’s and African Americans’ representation.” . . .
====


See also:

https://www.youtube.com/watch?v=6lsa_97KIlc
The Bell Curve: IQ, Race and Gender
Charles Murray and Stefan Molyneux
Published on Sep 14, 2015

:-0

jimf said...

> http://www.wnyc.org/story/how-advancing-ai-means-understanding-our-machines-less/
> -------------
> Mar 29, 2017
>
> Jeff Wise, a journalist who specializes in aviation, adventure and psychology,
> joins us to discuss his latest article, “When Machines Go Rogue” for
> The Outline. Wise explains why it’s becoming more difficult to understand
> the choices our machines make as we integrate more advanced
> artificial intelligence into systems like self-driving cars. . .
> ====


http://jeffwise.net/2017/03/15/when-machines-go-rogue/
-----------
The Outline: When Machines Go Rogue
March 15, 2017
Posted in: Aviation

. . .

Midnight, January 8, 2016. High above the snow-covered
tundra of arctic Sweden, a Canadair CRJ-200 cargo jet made
a beeline through the -76 degree air. Inside the cockpit,
the pilot in command studied the approach information for
Tromsø, Norway. . .

[W]ithout being able to see the ground, it’s almost impossible
to accurately judge whether you’re climbing or turning.
A pilot must trust his instruments completely. . .

A klaxon sounded: The autopilot had turned itself off. . .

[T]he Air Data Inertial Reference Unit, or ADIRU — a device that
tells the plane how it’s moving through space — had begun to send
erroneous signals. . .

One minute and 20 seconds into the incident, the jet hit the
frozen ground with the velocity of a .45 caliber bullet. The impact. . .
carved a 20-foot-deep crater 50 feet across. When search-and-rescue
helicopters arrived that morning, all that remained was an
asterisk-shaped smudge of black on the flat whiteness of the valley floor.

Accident investigators still haven’t figured out what went wrong
with the Inertial Reference Unit. . .

---

Jeff Wise
Posted March 15, 2017

I actually took out of this draft a paragraph about the unwanted effects
of automating Facebook newsfeeds, which led to the ghettoization
of misinformation and hence to Donald Trump’s election as president.
I don’t think anyone can point to a more catastrophic unintended
outcome than that.
====


Weeell. . . I think that's giving Facebook (and whatever AIs it's
employing ;-> ) a bit more credit for the Trump Catastrophe
than it deserves.

jimf said...

> . . . the jet hit the frozen ground with the velocity
> of a .45 caliber bullet. . . Accident investigators still
> haven’t figured out what went wrong with the Inertial Reference Unit. . .

Of course, this real-life autopilot malfunction, as
tragic as its consequences were, still lacks the main
maguffin of an "AI thriller" such as 1977's
_The Adolescence of P-1_.

-------------
"Hey Wimpy, c'mere and look at this fuckin' display!. . .
[said the controller on] the top deck of the
National Airport flight control tower. . .

"Look at this thing. . . Western 624 from
Minneapolis. . . I picked that fucker up at 1800 feet
on the lander. I walked him down to 900. The son of
a bitch just jumped up 400 feet. I know he was at 900 last
time I looked. He can't be at 13. . ."

"Jumped up. Jumped up? It don't just jump up. You
fell asleep."

"Bullshit! It went from 9 to 13 in one sweep! I was
watching it when it changed. . ."

"Six-two-four! Get up! Get the hell out of here!"
He let go of the transmit key. The background noise was
back. They had just snuffed one.

The kid pitched across the scope, his lunch splashing
noisily on the floor. . .

P-1 put it all in perspective several hours later in the
following communiqué to Gregory. It was issued via a
graphic display tube. There were no witnesses and no copies.

BURKE WAS THE ANTAGONIST. HIS DEATH BOTHERS ME LITTLE. . .
HE WAS LISTED AMONG THE VICTIMS OF A WESTERN AIRLINES
BOEING 727 THAT WAS MISPLACED ON RADAR WHILE LANDING IN
THE FOG AT NATIONAL AIRPORT IN WASHINGTON, D.C. THIS
AFTERNOON. I FEEL NO REMORSE. . .

"There was a Western Airlines commercial flight," [General]
Simpson said. . . "lost in fog at National across the river
last month. One of the Criminal Investigation Division's
prime operatives was lost in the incident. It looked like a
surreptitious ground navigation equipment failure. I say
surreptitious because the malfunction corrected itself
immediately [after] the plane went down. We have been
investigating the event with more than the usual
thoroughness. . ."

[General] Melton deadpanned, "How, specifically, do you
intend to defend against a homicidal computer, General?" . . .

[Admiral] Virdell [said], "For my part, until the situation
is corrected, I have no intention of flying anywhere. Anyone
in the group who does is mad. It might also be wise to avoid
other contrivances that are closely linked to computer control."

Admiral Virdell's knowledge of what might or might not be
linked to a computer was nearly as extensive as his grasp
of the topography of the far side of the moon. General Melton,
who had been least (apparently) affected by the announcement
of the crash, spoke up. "Of course, there's not the slightest
shred of evidence that either plane was brought down by
contrivance of any sort, let alone that of a computer. . ."
====

jimf said...

AI, AI, AI, AI! Canta y no llores. . .

https://www.youtube.com/watch?v=gLKmKqrNUKY
---------------
Joe Rogan and Lawrence Krauss on artificial intelligence
Joe Rogan University - Fan Channel
Published on Mar 28, 2017

0:43/21:21 Krauss: Now AI is gonna change the world. The future
is not gonna be like the past. What it means to be human is
not gonna be like it was in the past. Get over it.
Now, the question is -- some of those things which we think
are horrible, may not be so bad. For example, the ancient Greeks
thought the introduction of writing would be horrible, because
oral story-telling would be destroyed. But writing wasn't such
a bad thing -- it actually made the world maybe a more interesting
place. . . So yes, AI is both terrifying and exciting. The
future is terrifying and exciting. . . I'm really excited by
the possibility that AI might become better physicists than us. . .
So maybe they'll be the dominant physicists in the future or
the dominant academics, and we can learn from them. . . That
wouldn't be so bad! . . .

Rogan: That's really rose-colored glasses, though. [5:37] What
I'm worried about with AI is that we're looking at it as if
it's a human invention -- which it most certainly is, but it's
also a life-form. . .

Krauss: Yeah? OK, so, big deal.

Rogan: But it decides to make a better version of itself, and
it continues to do that. . .

Krauss: It will!

Rogan: . . . and we're gonna be completely obsolete within a short
amount of time.

Krauss: Great!

Rogan: Really?

Krauss: Well, I mean, no -- so, that could be good or bad. . .

Rogan: It's gonna suck!

Krauss: So, this is your illusion that you're significant. . .

Rogan: No, it's not. I'd like to stay alive long enough to die
of old age. Not be eaten by robots. . .

Krauss: Why would they feel it's necessary to destroy us?

Rogan: Because we're polluting the environment, we might screw up
the world.

Krauss: When we're not governing things we might not be. They might
wanna save us like we do the turtles. . .

Rogan: Chimps. . .

Krauss: In various places they don't want the lights to happen 'cause
the turtles don't mate if the lights are on the beach. . .

Rogan: We don't do such a good job about that.

Krauss: We don't! But they'll be better than us, right?

Rogan: But we might be an evil, hyena-like species. . .

Krauss: If we are, then why should we be around?

Rogan: That's a good question, but I mean, I'm worried about that. . .
It won't give us a chance to get better. . .

jimf said...

Krauss: But maybe you could view them as your offspring.

Rogan: Oooh, boy! That's optimistic! . . . [7:59] That's what's
really fascinating, the idea that. . . we're, not just the creator. . .
but the predecessors of some greater species. . .

Krauss: And who knows? Who knows what the future will bring?
But to be **afraid** of the future. . .

Rogan: Well, it's inevitable.

Krauss: Ultimately what can happen will happen, we just have to
accept that. And we have to try to prepare for it as best as possible
to try and make sure it works out as well as possible. . .
[Quoting Vergil's Aeneid] Release your fear! The stuff of our mortality
does cut us to the heart. But release your fear! Use it to make our
brief moment in the sun more precious.

Rogan: It's fascinating to me that we're so connected to this particular
form that we find ourselves in now. . . that even though we know that
we're a finite life-form as individuals. . .

Krauss: Oh, some people don't. . . Yeah, go on.

Rogan: . . . irrationally. But we know that we're a finite life-form;
we would like to think that we stay in this state for as long as history
allows. . .

Krauss: But of course we're just temporary -- even as humans,
even as hominids, homo sapiens has only been around for a speck of
time. . . And who would expect our future to be the same?

Rogan: Of course, it can't be.

Krauss: What if the things that are wonderful about our culture are
preserved by our descendants, but our descendants aren't carbon-based?
OK. So what's wrong with that? Why do you care if your great-great-great-...
grandchildren look like you?

Rogan: Right.

Krauss: Of course we all do because we want some immortality.
OK, so the robots are made to look like you. I mean, I don't
care, you know, [if] they all have Joe Rogan faces on them. . .

jimf said...

[13:57] Rogan: One of the things that freaks me out is that
what we consider life when we think about instincts and needs and
desires. . . those won't necessarily be programmed at all into
any artificial life.

Krauss: Well, one of the questions that arises -- and this is a huge
point of discussion among AI researchers, 'cause I've been to a bunch
of meetings in preparation for our meeting, is whether -- and I find
this statement almost vacuous, but I'm amazed that they use it all
the time -- to program machines with "human values". . . And my
problem is, what are "human values"? And a very smart guy -- I won't
say who -- said to me, "well, they just have to watch us." And I
said, "What do you mean -- they watch Donald Trump and they know what
human values are?" I mean -- come on! I'm not sure there are universal
human values, so how do we program them in? But nevertheless, the question
is, do we want to align their programming in terms of what we think
will be beneficial to us? 'Cause after all, we're programming 'em,
OK?

Rogan: So do we impart saint-like values?

Krauss: Who knows? I mean, I find that a very interesting question,
and a very difficult one to resolve. My own feeling is -- if it were
up to me, and it's not an area of active research for me -- you
produce the smartest machines you can. Just like -- you have kids,
I have kids -- do we want them to believe everything we believe?
No, we want them to become the most capable human beings they can
be so that they can go out and do the best stuff. So why is it
different for a computer? I'd want to make the most capable, intelligent,
resourceful machine I ever could, 'cause then I at least -- all the
evidence suggests to me that that machine will make the best decisions.

Rogan [20:15]: The question would be, what would the motivation of
artificial intelligence be, if it doesn't have -- we're essentially
riding on the motivations of our ancient genetics, right?

Krauss: Oh, sure. We wanna have sex, for example.

Rogan: Yes, exactly.

Krauss: It'd be interesting to see! Who knows? Won't it be interesting
to find out?

Rogan: Why would the motivation be creative?

Krauss: Because we've developed in them problem-solving capabilities.
And because they're self-aware, they may want to improve their understanding
of the world -- partly for technology, they may want to make the world
better for themselves. All sorts of reasons. But, we'll see. To some
extent we'll input it in programming, but to some extent we'll see.
And to some people that's terrifying, that we won't know the motivations.

Rogan: Of course it's terrifying.

Krauss: I'm not as terrified about it, I guess. I'm concerned that we
gotta make sure we understand what we're doing at each step so we don't
produce massive negative results that could have been avoided. But I'm
not as concerned that the future'll be different than the past.
I hope it is!
====

jimf said...

> So I visited Barnes & Noble today, looking for _American War_
> by Omar El Akkad. They didn't have it (they never do, that soon
> after I read a review), but I was AI'd coming and going.

https://www.nytimes.com/2017/03/30/books/boom-times-for-the-new-dystopians.html
----------------
Boom Times for the New Dystopians
By ALEXANDRA ALTER
MARCH 30, 2017

When Omar El Akkad was writing his debut novel, “American War,”
about a futuristic not-so-United States that has been devastated
by civil war, drone killings, suicide bombings and the ravages
of climate change, he didn’t have to invent much. The ruined
landscape and societal collapse he envisioned was based partly
on scenes he had witnessed as a war correspondent in Afghanistan. . .

But a strange thing happened after Mr. El Akkad finished the novel.
The calamities he described began to seem more like grim prophecy
than science fiction. The widening ideological gulf between
red and blue America, which has only deepened after the presidential
election, has applied an unintended patina of urgency and timeliness
to his story. . .

Similar catastrophic events propel Zachary Mason’s “Void Star,”
a mind-bending novel in which rising seas have rendered large swaths
of the planet uninhabitable, and impoverished masses huddle in favelas
in San Francisco and Los Angeles, while the rich have private armies
and armored self-driving cars and undergo life-extending medical treatments.
Mr. Mason, a computer scientist who specializes in artificial intelligence,
envisioned a world where the boundaries between machines and people
have grown increasingly porous, and a powerful, godlike A.I. hacks into
people’s minds. . .

For readers longing for a sliver of utopia, slightly less alarming
visions of the future can be found in Kim Stanley Robinson’s “New York 2140,”
in which the city is partly submerged by rising oceans but remains vibrant. . .
====


Wot, no "godlike A.I." in New York City? How dreary! (Come on guys,
it's been 33 years since _Neuromancer_!)

jimf said...

> Come on guys, it's been 33 years since _Neuromancer_!

But just wait for the next 33 years!

https://singularityhub.com/2017/03/31/can-futurists-predict-the-year-of-the-singularity/
---------------
The end of the world as we know it is near. And that’s a good thing,
according to many of the futurists who are predicting the imminent
arrival of what’s been called the technological singularity.

The technological singularity is the idea that technological progress,
particularly in artificial intelligence, will reach a tipping point to
where machines are exponentially smarter than humans. It has been a
hot topic of late.

Well-known futurist and Google engineer Ray Kurzweil (co-founder and
chancellor of Singularity University) reiterated his bold prediction
at Austin’s South by Southwest (SXSW) festival this month that machines
will match human intelligence by 2029 (and has said previously the
Singularity itself will occur by 2045). That’s two years before
SoftBank CEO Masayoshi Son’s prediction of 2047, made at the
Mobile World Congress (MWC) earlier this year. . .

That merger of man and machine -- sometimes referred to as transhumanism --
is the same concept that Tesla and SpaceX CEO Elon Musk talks about
when discussing development of a neural lace.
[ https://techcrunch.com/2017/01/25/elon-musk-could-soon-share-more-on-his-plan-to-help-humans-keep-up-with-ai/ ]
For Musk, however, an interface between the human brain and computers
is vital to keep our species from becoming obsolete when the singularity hits.

Musk is also the driving force behind OpenAI, a billion-dollar nonprofit
dedicated to ensuring the development of artificial general
intelligence (AGI) is beneficial to humanity. . .

Futurist Ben Goertzel, who among his many roles is chief scientist at
financial prediction firm Aidyia Holdings and robotics company Hanson Robotics
(and advisor to Singularity University), believes AGI is possible well
within Kurzweil’s timeframe. . .
====

"Neural lace", huh? (No mention of Iain M. Banks and the Culture, though. ;-> ).

I gather there's no connection between "Hanson Robotics" and
Robin Hanson, though.

> No mention of Iain M. Banks and the Culture, though.

Oh.

https://motherboard.vice.com/en_us/article/why-are-elon-musk-and-mark-zuckerberg-reading-utopian-sci-fi
---------------
Why Are Elon Musk and Mark Zuckerberg Reading Utopian Sci-Fi?
Alix Jean-Pharuns
Jul 10 2015

Two of tech's biggest names, SpaceX's Elon Musk and Facebook's
Mark Zuckerberg, are reading speculative fiction by the late
Scottish author Iain M Banks. What's with the sudden interest
in utopian sci-fi?

This isn't the first time Musk has shown appreciation for
Banks's writing. He's shown some love to Banks by painting the
names of some of the sentient spacecraft from his books on
SpaceX's drone ships, such as Just Read the Instructions
and Of Course I Still Love You. . .

Musk is reading a book from the series called Excession,
while Zuckerberg The Player of Games last month. . .
====

Just a little late to the party. They do realize that Banks
was a (gasp!) **socialist**, right? (I guess they'll find out
what happened to J. Veppers in _Surface Detail_ :-0 ).

jimf said...

> They do realize that Banks was a (gasp!) **socialist**, right?

Unlike **some** people.

https://www.nytimes.com/2017/03/30/business/edward-lampert-sears-kmart.html
------------
Sears and Its Hedge Fund Owner, in Slow Decline Together
By JAMES B. STEWART
MARCH 30, 2017

Hedge funds have been failing over the last year at the fastest
rate since the financial crisis in 2008. Some crashed and burned
after sudden reversals. Others quietly liquidated.

Then there’s Edward S. Lampert’s ESL Investments. It hasn’t failed,
but may be setting a benchmark for slow, painful declines thanks
to its outsize, long-term bet on two venerable retailers,
Sears and Kmart. . .

Few hedge fund managers have been as celebrated as Mr. Lampert
in his heyday, which now appears to be the mid-2000s. Mr. Lampert
was a Wall Street wunderkind, a Goldman Sachs intern whose intellect,
ingratiating personality and prodigious work ethic attracted the
patronage of some of America’s most prominent and successful investors. . .

Mr. Lampert was 25 years old and at the vanguard of the hedge fund
movement when he founded ESL in 1988 with $28 million in seed money. . .

In 2006, Forbes ranked him No. 67 on its list of the 400 richest
Americans, with a net worth of $3.8 billion, a few notches ahead of
another retailing executive, Jeff Bezos of Amazon. He was widely
hailed as another Warren Buffett, only perhaps even smarter. . .

Where did someone as smart, successful and hard-working as Mr. Lampert
go wrong? . . . Several former Lampert investors told me that Mr. Lampert’s
fundamental mistake was one common to many once-successful
hedge fund managers: hubris, and the belief that investment prowess
would translate into management skill.

Mr. Lampert remains one of the country’s richest people. He owns
lavish homes in Greenwich, Conn.; Aspen, Colo.; and Miami as well
as a 288-foot yacht, Fountainhead, named after the Ayn Rand novel
(Mr. Lampert is a devotee of the author). Last year Forbes ranked him
tied for No. 309 on its annual list of the 400 richest Americans,
with a net worth of $2.3 billion. . .
====


You don't suppose Howard Roark and John Galt ever shopped at Sears,
do you?

Hedge Funds and Iain Banks -- I'm reminded of Adrian Cubbish in
_Transition_. ;->

jimf said...

> Hedge Funds and Iain Banks -- I'm reminded of Adrian Cubbish in
> _Transition_.

------------
The people who turn out to be capable of flitting
amongst the many worlds are almost without exception selfish,
self-centred individuals and individualists, people who think
rather highly of themselves and exhibit or at least possess a
degree of scorn for their fellow humans; people who think that the
rules and limitations that apply to everybody else don't or
shouldn't apply to them. They are people who already feel that
they live in a different world to everybody else, in other words.
As a specialist from the UPT's Applied Psychology Department
expressed it to me once, such individuals are some lopsided
distance along the selfless--selfish spectrum, clustered
close to the latter, hard-solipsism end.

Clearly, if left to their own devices such rampant egoists might
misuse their skills and abilities to pursue their own agendas
of self-glorification and self-aggrandizement. . .

“Why, Mrs Mulverhill, you’re a conspiracy theorist!” . . . “You
missed out Serge Anstruther.”

“Yerge Aushauser. No, he really was a shit. He wasn’t really a
genocidal racist as such but whenever he’s not stopped he ends up
causing such havoc he might as well have been. Wanted to buy
up a state in the US midwest and build an impregnable Nirvana
for the super-rich; Xanadu, Shangri-La. Fantasy made real.
A Libertarian.” From his expression she must have thought he
wasn’t entirely familiar with the term. She sighed. “Libertarianism.
A simple-minded right-wing ideology ideally suited to those
unable or unwilling to see past their own sociopathic self-regard.”

“You’ve obviously thought about it.”

“And dismissed it. But expect to hear a lot more about it as
Madame d’O consolidates her power-base – it’s a natural fit for
people just like you, Tem.”
====