Comments on amor mundi: "Reminder" (blog by Dale Carrico)

----------------------------------------------------------------
jimf (2016-08-10 10:09)

_The Twilight Zone_ . . . _Suddenly Last Summer_ . . . and the Stepford Shrinks.

http://www.nytimes.com/2016/08/09/health/brain-patient-hm-book-dittrich.html
-------------
A Brain Surgeon's Legacy Through a Grandson's Eyes
A Conversation With BENEDICT CAREY
AUG. 8, 2016

Luke Dittrich is the author of a new book, "Patient H.M.: A Story of Memory, Madness, and Family Secrets," about his grandfather, Dr. William Scoville. . .

In 1953, at Hartford Hospital, Dr. William Scoville had removed two slivers of tissue from the brain of a 27-year-old man with severe epilepsy. The operation relieved his seizures but left the patient — Henry Molaison, a motor repairman — unable to form new memories. Known as H. M. to protect his privacy, Mr. Molaison went on to become the most famous patient in the history of neuroscience, participating in hundreds of experiments that have helped researchers understand how the brain registers and stores new experiences. . .

"The textbook story of Patient H. M. — the story I grew up with — presents the operation my grandfather performed on Henry as a sort of one-off mistake. It was not. Instead, it was the culmination of a long period of human experimentation that my grandfather and other leading doctors and researchers had been conducting in hospitals and asylums around the country. . .

The lobotomy is usually remembered as a brutal treatment for mental illness that was ultimately abandoned. . . [W]hat's been ignored is that many of the leading doctors and scientists of the era — including my grandfather, who taught at Yale and was the director of neurosurgery at Hartford Hospital — viewed the lobotomy as having not just therapeutic potential, but also great experimental utility.

The rise of psychosurgery gave doctors and researchers license to perform on human beings the same sorts of brain-cutting experiments once limited to chimpanzees. As one lobotomist put it, 'Man is certainly no poorer as an experimental animal merely because he can talk.'

That attitude had a terrible human cost, and one of the people who paid the price was Patient H. M. Modern brain science has dark roots. . .

For most of his life, . . . Henry was just a pair of initials floating in front of a constellation of clinical and experimental data. His story was tightly controlled by the researchers who'd built their careers on him and who had an interest in presenting his story in a particular way. . .

When my grandfather operated on Henry, modern principles of informed consent didn't exist. Today, there are relatively good protections in place for human research subjects.

That said, the best regulations on paper mean nothing without oversight and enforcement. . .

While researching my grandfather's career as a lobotomist, it struck me that a great majority of the people he lobotomized were women. When you consider that the side effects of the lobotomy — tractability, passivity, docility — overlap nicely with what many men considered to be ideal feminine traits, that disparity is perhaps not surprising. . ."

----------------------------------------------------------------
jollyspaniard (2016-08-10 04:29)

My pet bugaboo is already here. The facebook newsfeed.

----------------------------------------------------------------
Dale Carrico (2016-08-09 18:14)

The trigger for the posted observation was a headline that flitted by my twitter stream, "Should you trust a robot to decide who should live or die?" or something like that... and although the article was congenially skeptical and critical blah blah blah, it seems to me the framing invests robots with agency/responsibility in a way that displaces the indispensable focus of critique away from the people who are responsible for the threats and problems at hand. It is only _apparently critical_ when tech talkers take a break from the usual promotional/self-promotional aria of infantile wish-fulfillment fantasizing about robot gods solving all our problems for us to make a "faux balanced" disasterbatory gesture instead... on the other hand...! concerning bad robots or ubergoo robocalypse or whatever. Both positions occupy the hyperbolic space uniquely nurturing of futurological nonsense while distracting attention from... actual things actual computation actually does and the actual people who fund, code, maintain, own, use these actual things in problematic ways.
Arguing with techno-transcendentalists hardened me against such rhetorical tactics, but it is interesting to observe the way mainstream corporate-military tech-talkers who might very well find transhumanists as hilarious as we do nonetheless replicate so many of the go-to strategies of hardcore robocultic interlocutors of yore...

----------------------------------------------------------------
jimf (2016-08-09 16:00)

Combining the last two topics here, it wouldn't at all surprise me if in the not-too-distant future (and I think this is a "grounded" prediction, rather than a "futurological" one ;-> ), psychiatric diagnoses themselves will be performed by machine. Hey, I had an EKG machine diagnosis when I had my intake medical exam with my last employer in 1998; unfortunately, the machine decided -- incorrectly -- that I had probably once had a heart attack, which led to a bunch more -- ultimately unnecessary -- tests.

There's a machine psychiatric exam in the 1964 episode of _The Twilight Zone_ TV series, "Number Twelve Looks Just Like You". Hey, it's viewable online after all:
http://putlocker.is/watch-the-twilight-zone-tvshow-season-5-episode-17-online-free-putlocker.html
(I'm not responsible for any viruses you might get from this site! But the video works. ;-> )

----------------------------------------------------------------
jimf (2016-08-09 15:36)

> Every time you are asked to trust a robot or algorithm. . .

It's not likely you'll be **asked** to trust anything.
For example, employees of the organizations using that "Scout" program from "cybersecurity" firm Stroz Friedberg (that's supposed to identify disgruntled employees by filtering e-mail according to a "psycholinguistics" algorithm) certainly aren't going to be **asked** if they're willing to submit to that evaluation by software! In fact, they won't even know their employer is using it (the list of organizations using that software is kept secret). They'll be **told** that they have to sign, as a condition of employment, a contract that stipulates that workplace computers do not belong to them, and that anything they do with the computer is subject to monitoring, blah blah blah. Same thing with the software that the three-letter agencies are using to monitor everybody's communications (whether it's the NSA's "Echelon" or the FBI's "Carnivore" or whatever the current incarnations of those things might be). Same thing with the software used to determine who might be a security risk, or who should be on the "no-fly" list. Same thing with the software used to determine your credit rating.
The software itself is, in all cases, classified (or at the very least proprietary), and you, the individual, certainly don't get to know what algorithm is being used (or even **if** an algorithm is being used).

This will continue for the foreseeable future, and continue to escalate. There **will** be ubiquitous cameras in public places within a few decades, and if the kind of software Microsoft has described (to read people's facial expressions, or emotional states, or violent intentions) is used to filter that visual data, then you won't ever be "asked to trust" that, either.

There may be future whistle-blowers, a la Snowden; there may be court cases and lawsuits from people alleging they were discriminated against or fired or passed over for promotion, or denied credit, or denied permission to travel, or harassed by the police, on the basis of what they think might be "AI". Those cases will be hard to prove, and they'll be fighting against a political headwind (I don't see the "war on terror" ending anytime within my remaining lifetime).

So it goes.