I’m up for a dialogue!
What I can offer others: Some people say I give good advice. I have broad, shallow knowledge of lots of things; if you’re thinking “Has anyone looked at X yet?” or “There are no resources for problem Y!”, chances are good I’ve already bookmarked something about that exact thing.
What I hope to gain: I’m most interested in any or all of the following topics:
How precisely do human values/metaethics need to be encoded, rather than learned later, to result in a non-doom-causing AGI? Can this be formalized/measured? (Context)
How “smart” (experienced, talented, strong working memory, already up to speed) do I need to be about ML and/or maths, personally, to actually help with technical AI alignment? (Could discuss this question for any or all of the 3 main “assumption clusters”: QACI/agent-foundations/pessimism vs. prosaic-alignment/LLMs/interpretability/whatever Quintin Pope is working on vs. both sides (Wentworth(?)).)
How deep in weirdness-points debt am I already? Does this “block” me from doing anything actually helpful w.r.t. AI governance? (Basically the governance version of the above topic.)
Another version of the above 2 questions, but related to my personality/“intensity”.
Intelligence enhancement!