Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Tuesday, August 09, 2016

Reminder

Every time you are asked to trust a robot or algorithm with a decision, you are actually being asked to trust programmers and owners you don't know to decide for you. Don't let technobabble distract you from the actual actors on stage.

ADDED (From the Moot to this post): The trigger for the posted observation was a headline that flitted by my twitter stream, "Should you trust a robot to decide who should live or die?" or something like that... and although the article was congenially skeptical and critical blah blah blah it seems to me the framing invests robots with agency/responsibility in a way that displaces the indispensable focus of critique away from the people who are responsible for the threats and problems at hand. It is only apparently critical when tech talkers take a break from the usual promotional/self-promotional aria of infantile wish-fulfillment fantasizing about robot gods solving all our problems for us to make a "faux balanced" disasterbatory gesture instead... on the other hand...! concerning bad robots or ubergoo robocalypse or whatever. Both positions occupy the hyperbolic space uniquely nurturing of futurological nonsense while distracting attention from... actual things actual computation actually does and the actual people who fund, code, maintain, own, use these actual things in problematic ways. Arguing with techno-transcendentalists hardened me against such rhetorical tactics, but it is interesting to observe the way mainstream corporate-military tech-talkers who might very well find transhumanists as hilarious as we do nonetheless replicate so many of the go-to strategies of hardcore robocultic interlocutors of yore...  

5 comments:

jimf said...

> Every time you are asked to trust a robot or algorithm. . .

It's not likely you'll be **asked** to trust anything. For example, employees
of the organizations using that "Scout" program from "cybersecurity"
firm Stroz Friedberg (that's supposed to identify disgruntled
employees by filtering e-mail according to a "psycholinguistics"
algorithm) certainly aren't going to be **asked** if they're willing
to submit to that evaluation by software! In fact, they won't even
know their employer is using it (the list of organizations using
that software is kept secret). They'll be **told** that they have to sign,
as a condition of employment, a contract that stipulates that workplace
computers do not belong to them, and that anything they do with the computer
is subject to monitoring, blah blah blah. Same thing with the software
that the three-letter agencies are using to monitor everybody's
communications (whether it's the NSA's "Echelon" or the FBI's
"Carnivore" or whatever the current incarnations of those things might
be). Same thing with the software used to determine who might be
a security risk, or who should be on the "no-fly" list. Same thing
with the software used to determine your credit rating. The software
itself is, in all cases, classified (or at the very least proprietary),
and you, the individual, certainly don't get to know what algorithm
is being used (or even **if** an algorithm is being used).
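
To make concrete what "filtering e-mail according to a 'psycholinguistics'
algorithm" might look like mechanically, here is a minimal, purely
illustrative sketch in Python. The actual Scout product is proprietary
and undisclosed, so the word list, weights, and threshold below are
invented assumptions for the example, not its real method:

    # Purely illustrative: a crude keyword-scoring "psycholinguistic" filter.
    # The lexicon, weights, and threshold are invented for this sketch and
    # do not describe any real vendor's product.
    import re

    # Hypothetical lexicon: words presumed to signal "disgruntlement",
    # each with an arbitrary weight.
    LEXICON = {
        "unfair": 2.0,
        "furious": 3.0,
        "quit": 1.5,
        "lawyer": 2.5,
        "always": 0.5,  # absolutist wording is often cited in such schemes
        "never": 0.5,
    }
    FLAG_THRESHOLD = 4.0  # arbitrary cutoff

    def score_message(text):
        """Sum the weights of lexicon words appearing in the message."""
        words = re.findall(r"[a-z']+", text.lower())
        return sum(LEXICON.get(w, 0.0) for w in words)

    def flag_disgruntled(messages):
        """Return only the messages whose score crosses the threshold."""
        return [m for m in messages if score_message(m) >= FLAG_THRESHOLD]

    if __name__ == "__main__":
        inbox = [
            "Lunch at noon? I never miss taco day.",
            "This review process is unfair and I am furious; maybe I should quit.",
        ]
        for msg in flag_disgruntled(inbox):
            print("FLAGGED:", msg)

The point of the toy, of course, is that anyone scored by something like
this never gets to see the lexicon, the weights, or the threshold --
which is exactly the asymmetry described above.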

This will continue for the foreseeable future, and it will continue to escalate.
There **will** be ubiquitous cameras in public places within a few
decades, and if the kind of software Microsoft has described (to read
people's facial expressions, or emotional states, or violent intentions)
is used to filter that visual data, then you won't ever be "asked
to trust" that, either.

There may be future whistle-blowers, a la Snowden; there may be court
cases and lawsuits from people alleging they were discriminated against
or fired or passed over for promotion, or denied credit, or denied
permission to travel, or harassed by the police, on the basis of what
they think might be "AI". Those cases will be hard to prove, and
they'll be fighting against a political headwind (I don't see the
"war on terror" ending anytime within my remaining lifetime).

So it goes.

jimf said...

Combining the last two topics here, it wouldn't surprise me at all
if in the not-too-distant future (and I think this is a "grounded"
prediction, rather than a "futurological" one ;-> ), psychiatric
diagnoses themselves were performed by machine. Hey, I got a
machine-generated EKG diagnosis during my intake medical exam with
my last employer in 1998;
unfortunately, the machine decided -- incorrectly -- that I had probably
once had a heart attack, which led to a bunch more -- ultimately unnecessary --
tests.

There's a machine psychiatric exam in the 1964 episode of _The Twilight
Zone_ TV series, "Number 12 Looks Just Like You". Hey, it's viewable online
after all:
http://putlocker.is/watch-the-twilight-zone-tvshow-season-5-episode-17-online-free-putlocker.html
(I'm not responsible for any viruses you might get from this site!
But the video works. ;-> ).

Dale Carrico said...

The trigger for the posted observation was a headline that flitted by my twitter stream, "Should you trust a robot to decide who should live or die?" or something like that... and although the article was congenially skeptical and critical blah blah blah it seems to me the framing invests robots with agency/responsibility in a way that displaces the indispensable focus of critique away from the people who are responsible for the threats and problems at hand. It is only apparently critical when tech talkers take a break from the usual promotional/self-promotional aria of infantile wish-fulfillment fantasizing about robot gods solving all our problems for us to make a "faux balanced" disasterbatory gesture instead... on the other hand...! concerning bad robots or ubergoo robocalypse or whatever. Both positions occupy the hyperbolic space uniquely nurturing of futurological nonsense while distracting attention from... actual things actual computation actually does and the actual people who fund, code, maintain, own, use these actual things in problematic ways. Arguing with techno-transcendentalists hardened me against such rhetorical tactics, but it is interesting to observe the way mainstream corporate-military tech-talkers who might very well find transhumanists as hilarious as we do nonetheless replicate so many of the go-to strategies of hardcore robocultic interlocutors of yore...

jollyspaniard said...

My pet bugaboo is already here. The facebook newsfeed.

jimf said...

_The Twilight Zone_ . . . _Suddenly Last Summer_. . .
and the Stepford Shrinks.

http://www.nytimes.com/2016/08/09/health/brain-patient-hm-book-dittrich.html
-------------
A Brain Surgeon’s Legacy Through a Grandson’s Eyes
By Benedict Carey, Aug. 8, 2016

Luke Dittrich is the author of a new book, “Patient H.M.: A Story of Memory, Madness,
and Family Secrets,” about his grandfather, Dr. William Scoville. . .

In 1953, at Hartford Hospital, Dr. William Scoville had removed
two slivers of tissue from the brain of a 27-year-old man with
severe epilepsy. The operation relieved his seizures but left the
patient — Henry Molaison, a motor repairman — unable to form
new memories. Known as H. M. to protect his privacy, Mr. Molaison
went on to become the most famous patient in the history of
neuroscience, participating in hundreds of experiments that
have helped researchers understand how the brain registers
and stores new experiences. . .

"The textbook story of Patient H. M. — the story I grew up
with — presents the operation my grandfather performed on
Henry as a sort of one-off mistake. It was not. Instead, it
was the culmination of a long period of human experimentation
that my grandfather and other leading doctors and researchers
had been conducting in hospitals and asylums around the country. . .

The lobotomy is usually remembered as a brutal treatment for
mental illness that was ultimately abandoned. . .
[W]hat’s been ignored is that many of the leading doctors
and scientists of the era — including my grandfather, who taught
at Yale and was the director of neurosurgery at Hartford Hospital --
viewed the lobotomy as having not just therapeutic potential,
but also great experimental utility.

The rise of psychosurgery gave doctors and researchers license to perform
on human beings the same sorts of brain-cutting experiments once
limited to chimpanzees. As one lobotomist put it, 'Man is certainly
no poorer as an experimental animal merely because he can talk.'

That attitude had a terrible human cost, and one of the people who
paid the price was Patient H. M. Modern brain science has dark roots. . .

For most of his life,. . . Henry was just a pair of initials
floating in front of a constellation of clinical and experimental
data. His story was tightly controlled by the researchers who’d
built their careers on him and who had an interest in presenting
his story in a particular way. . .

When my grandfather operated on Henry, modern principles of
informed consent didn’t exist. Today, there are relatively good
protections in place for human research subjects.

That said, the best regulations on paper mean nothing without
oversight and enforcement. . .

While researching my grandfather’s career as a lobotomist, it
struck me that a great majority of the people he lobotomized
were women. When you consider that the side effects of the lobotomy --
tractability, passivity, docility — overlap nicely with what many
men considered to be ideal feminine traits, that disparity is
perhaps not surprising. . ."
====