One comment of mine, cross-posted from Ozy’s Blog
Things worth noting people may not know:
At EA Global 2014, and (I think) other EA Globals, CFAR has a) been present, b) specifically talked about a goal/plan, broken down as follows:
– The world has big problems, and needs people who are smart, capable, rational, and altruistic (or at least motivated to solve those problems for other reasons)
– CFAR has a limited number of people they can teach
– People tend to rub off on each other when they hang out with each other
– People vary in how rational, altruistic, and capable they are.
– So, CFAR seeks out people who have SOME combination of high rationality, altruism, and competence. They run workshops with all those people, and one of their hopes is that the rationality/altruism/competence will rub off on each other.
So it is not new that CFAR has (at least) a subgoal of “create people capable of solving the world’s problems, with the motivation to do so.” This may not have been well publicized either, for good or for ill.
I think this was a worthy goal, and the correct one for them to focus on given their limited resources.
So the new AI announcement is basically them saying “we are refining this one step further, to optimize for AI Risk in particular.”
(Whether you think that is good or bad depends on a lot of things)
-
[Epistemic Effort: I noticed myself making a vague statement about CFAR saying this every year, and then realized I only actually had one distinct memory of it, and updated the statement to be more-accurate-given-my-memories]