TLDR+question:
I appreciate you writing that article. Humans seem bad at choosing what to work on. Is there a sub-field of AI alignment where a group of researchers focuses solely on finding the most relevant questions to work on and maintaining a list, from which others then pick?
• • •
(I don’t think this is an original take)
Man, I genuinely think you make me smarter and better at pursuing my goals. I love reading your work. I appreciate you writing this.
I notice how easily I go “let’s jump right into thinking about this question” / “let’s jump right into doing this thing” instead of first asking “Is this relevant? How do I know that? Does it serve my goals better than something else would?” and then dropping it if it isn’t relevant.
In (one of) Eliezer’s doom articles, he says that he needs to literally stand behind people while they work for them to do anything sensible (a misquote, and maybe hyperbole on his end). From that, I judge that people in AI (myself included, and probably people generally), even people who are very rigorous about execution, do NOT have sufficient rigor when it comes to choosing which task to do.
From what EY says and from what you say, I judge that AI researchers (and people in general, myself included) choose tasks more in terms of “what has keywords in common with my goals” + “what sounds cool” + “what sounds reasonable” + “what excites me”, which is definitely NOT the same as “what best advances my goals given my resources”, and there aren’t many excuses for doing that.
Except maybe this one: it’s a bias we only recently realized humans have. You didn’t know before. Now that you know, stop doing it.
(I understand that researchers walk a difficult line where they may ALSO need to optimize for “projects that will get funded”, which may mean “sounds cool to grant-givers”. But I believe the point still holds.)
There may be an additional problem of misplaced humility, whereby people assume that already-selected problems must be relevant “because my smart colleagues wouldn’t be working on them otherwise”, instead of simply making it policy that the reasons for doing each project are known and open to challenge.