[Epistemic Effort: noticed that I was referring to “CFAR’s thought process”, which was sort of obfuscating details. I think I have a good model of Anna’s thought process. I don’t have a good model of most other CFAR staff nor how CFAR as a unit makes decisions. I got more specific.]
The problem with deworming as an example is that it’s really hard for me to imagine that cause a) being the most important cause, and b) being urgent in the way that existential risk from AI is urgent.
I don’t think Anna’s thought process was that, after founding CFAR, she decided “We should use our program to help the most important cause,” and AI Risk then turned out to be the most important cause. And I think an ideological-Turing-test approach to examining the CFAR decision needs to include a deeper level of understanding/empathy before one can judge it.
My understanding of Anna (based on some conversations with her that I’m pretty sure are considered public, but hope she’ll correct me if I misconstrue anything), is that, from day 1, her process was something like:
[Note: periodically I switch from ‘things I’m fairly confident about the thought process’ to ‘speculation on my part’, and I try to distinguish that where I notice it]
1) (long time ago) - “I want to help the world. What are the ways I might do that?” (tries lots of things, including fairly mainstream altruism things)
2) Ends up thinking about existential risk and AI Risk in particular, crunches numbers, comes to believe this is the most important thing to work on. But not just “this seems like the most important issue among several possible issues”. It’s “holy shit, this is mind-bogglingly important, and nobody is working on this at all. This problem is incredibly confusing. And it looks like the default course of history includes a frighteningly high chance that humanity will just be extinguished in the next century.”
3) Starts examining the problem [note: some of my own speculation here], and notices that the problem is extremely challenging to think about. It is difficult in ways that less_wrong_2016 has mostly gotten over (scope insensitivity, weirdness, etc.), but continues to be difficult in ways that present_day_cfar/less_wrong continue to find challenging, because building an AI is really hard.
We have very little idea what the architecture of an AI will look like, very little idea of both how to design an AI to be “rational” in a way that doesn’t get us killed, and (relatively) little idea of how to interact politically with the various people/orgs who may be relevant to AI safety. At every step in the journey, we have very limited evidence and our present day rationality is not good enough to solve the problem.
4) Eliezer founds Less Wrong, largely in an attempt to solve the above problems. Importantly, he has a clearly defined ulterior motive (solving AI Risk), while also earnestly believing in the general cause of rationality and hoping it benefits the world in other ways. (http://lesswrong.com/lw/d6/the_end_of_sequences/)
5) Less Wrong isn’t sufficient to solve the above problems. MIRI (then SingInst) begins running various projects to improve rationality.
6) Those projects aren’t sufficient / take up too much organizational focus from MIRI. CFAR is spun off, headed by Anna. As with Less Wrong, she has a clear ulterior motive in mind while also earnestly believing in rationality as a generally valuable thing for the world. Her express purpose in creating CFAR is to build a tool that is necessary to solve a problem, because the future is on fire and it needs putting out. (My understanding is that, possibly due to a failure to communicate, plausibly due to some monkey-brain-subconscious-or-semi-conscious slytherining, other founders like Julia and Val join CFAR with the expectation that it is cause neutral.)
7) CFAR makes significant improvements in its ability to help people improve their lives—but continues to struggle to build its epistemic rationality curriculum, making decisions based on limited information.
[More speculation on my part] - I think part of the reason CFAR struggled to develop an epistemic rationality curriculum is because epistemic rationality isn’t that relevant to most people. To develop it, you need concrete projects to work on that actually benefit from probability theory and sifting through mediocre evidence.
So, CFAR is failing to achieve both Anna’s original motivation for it AND one of its more overt goals (one that folk like Julia wholeheartedly share). So, it begins attempting AI-focused workshops. I do not currently know the results of those workshops.
8) Here, I stop having a clear understanding of the situation, but it’s something to the effect of “AI workshops were not sufficient to push the development of rationality to a level that’d be sufficient to succeed at AI safety—it required some overall shifts in organizational focus.” (As Satvik noted somewhere, I think unfortunately on Facebook, organizational focus gives you clearer guidelines for when to pursue new opportunities and when to say no to things.)
...
So… I think it’s reasonable to look at all that and disagree with the outcome. I applaud Ozy for trying to think through the situation in an ideological-Turing-test sort of way. But I think to really critique this fairly, it’s not enough to use something like “GiveWell spins off a rationality org, which then ends up deciding it makes the most sense to focus that rationality on deworming.” I’m not sure I can think of a good example that really captures it.
My understanding/guesses of Anna’s cruxes (perhaps more honestly: my own cruxes, informed by things I’ve heard both her and Eliezer say) are:
a) AI-grade rationality is urgent.
b) While an important aspect of rationality is managing your filter bubble (and, consequently, your public image, so that new people can be attracted to your filter bubble to cross-pollinate), it is also an aspect of rationality that you can make more progress on an idea by specializing in it and getting into the nitty-gritty details as much as possible.
c) AI-grade rationality will benefit more from gritty-details work than from preserving a broader filter bubble.
As well as the point Ozy addresses, which is that to the extent this was CFAR’s existing goal, it is better for them to be honest about it.
(I do think it’d be an incredibly good thing if there were also a truly neutral organization that helps people pursue their own goals, that develops a generally applicable art of rationality, and that eventually raises the overall sanity waterline. I think it is deeply sad that CFAR is not that, but that is just one of a large number of deeply sad things the world lacks, and hopefully we’ll eventually have the resources to do all of them.)