If funding were available, the Centre for Effective Altruism would consider hiring someone to work closely with Prof Nick Bostrom to provide anything and everything he needs to be more productive. Bostrom is, of course, the Director of the Future of Humanity Institute at Oxford University, and author of Superintelligence, the best guide yet to the possible risks posed by artificial intelligence.
Nobody has yet confirmed they will fund this role, but we are nevertheless interested in getting expressions of interest from suitable candidates.
The list of required characteristics is hefty, and the position would be a challenging one:

- Willing to commit to the role for at least a year, and preferably several
- Able to live and work in Oxford during this time
- Conscientious and discreet
- Able to keep flexible hours (some days a lot of work, others not much)
- Highly competent at almost everything in life (for example, organising travel, media appearances, choosing good products, and so on)
- Will not screw up and look bad when dealing with external parties (e.g. media, event organisers, the university)
- Has a good personality 'fit' with Bostrom
- Willing to do some tasks that are not high-status
- Willing to help Bostrom with both his professional and personal life (to free up his attention)
- Can speak English well
- Knowledge of rationality, philosophy and artificial intelligence would also be helpful, and would allow you to do more work as a research assistant.

The research Bostrom can do is unique; to my knowledge we don't have anyone who has made such significant strides clarifying the biggest risks facing humanity as a whole. As a result, helping increase Bostrom's output by, say, 20% would be a major contribution. This person's work would also help the rest of the Future of Humanity Institute run smoothly.

Pondering the "tasks that are not high-status" required of this paid helpmate, Jim commented, "Maybe he needs somebody (as Kurzweil is said to employ someone) to count out his daily doses of life-extending vitamin pills... Or give him nootropic foot massages. God only knows."
As an academic I am quite familiar with the phenomenon of graduate students with research positions for professorial muckety-mucks who sift through their e-fanmail, walk their dogs, proofread their scrawls, get their coffee orders just so and so on, and as somebody who lives in a California metropolitan area I am no less familiar with PAs following celebrity-CEOs around like serfs on speed-dial, permanently at-the-ready for ego (to say the least) fluffing, so I guess I don't find that part of the proposal utterly illegible -- although the phenomenon rather grosses me out as a general matter.
Of course, there is a rich vein of humor to be mined in the baldly repetitious pleas for cash here, the corn-ball con-job of declaring Bostrom's "significant strides clarifying the biggest risks facing humanity" -- by which is meant Bostrom's distraction of attention from real problems of anthropogenic climate change, human trafficking and precarization, neglected treatable diseases, basic infrastructure and social support failures in overexploited regions and populations, weapons proliferation, and so on -- to focus instead on futurological fancies like robot armies, nanobotic plagues, and devilish superintelligent post-biological Robot Gods (Bostrom's, er, "specialty" these days).
I've been reading and engaging with Robot Cultists for over a quarter of a century at this point and I still gasp at the flabbergasting self-congratulatory assignment of terms like "rationality" to describe such recklessly unwarranted wish-fulfillment fantasizing, of terms like "philosophy" to describe fanboy flamewars over stipulated properties of imaginary objects unmoored from reality (except as symptoms for their psychotherapists to puzzle over), and of phrases like "effective altruism" to describe the fleecing of technoscientific illiterates by guru-wannabes who never actually make anything but pitches for more dough.
But above all I guess what I find most puzzling about this proposal is that Bostrom is supposed to be one of the Robot Cult's most legitimate, high-profile academics. He is widely published and comparatively widely read. He is affiliated with Oxford University, and so on. Doesn't he already have research assistants getting his dry-cleaning and organizing his mail? Bostrom hob-nobs with big wigs in the corporate-military think-tank archipelago these days. Surely he's got billionaires like those Koch Brothers of reactionary futurology, Peter Thiel and Elon Musk, in his Rolodex. Kurzweil's cooling his heels over at Google. Doesn't Bostrom have sugar daddies who can get somebody to put sugar in his coffee already? Heck, Martine Rothblatt is another one of the fellow-faithful, although her money bags are more at the disposal of a different sect of the Robot Cult, the cyberangel avatar in Holodeck Heaven sub-sect of the techno-immortalist sub-sect of the transhumanist "movement."
Is this plea to get Bostrom a gofer just an embarrassing crass scam for cash on the part of the Centre for Effective [sic] Altruism and the Less Wrong throng? Is the robocultic mad scientist masters of the universe schtick these futurological eminences like to play out actually as marginal an enterprise as it deserves to be, leaving Bostrom's Believers to work on a shoestring despite all those corporate logos and well-heeled institutional contacts they flog? Are Eliezer Yudkowsky's man minions worried that they are falling out of the futurological fraud loop and looking to get a sect-friendly libertechbrotarian inside man into Bostrom's lofty perch? As I said, Kurzweil is peddling his vaporware for Google now, the Singularity has terminologically transferred from Eliezer's fanboy circle-jerk to the venture-capitalists of Singularity University, and Bostrom's record of distancing himself from the futurological faithful by founding first the conspicuously cultic World Transhumanist Association and then next the stealth transhumanist cultic Institute for Ethics and Emerging Technologies and now the thoroughly mainstreamed and fumigated Oxford Future of Humanity Institute (never changing his assumptions, aspirations, methods, or canon very much along the way) can't be inspiring confidence. Perhaps this is just a clumsy dash for an open seat before the music stops: perhaps the Singularity isn't the black hole some Robot Cultists are contemplating at the moment.