I’m very dubious that we’ll solve alignment in time, and it seems like my marginal dollar would do better in non-obvious causes for AI safety. So I’m very open to funding something like this in the hope that we get an AI winter, regulatory pause, etc.
I don’t know if you or anyone else has thought about this, but what is your take on whether this or WBE (whole brain emulation) is more likely to get done successfully? WBE seems a lot more funding-intensive, but it also seems easier to measure progress on, and it potentially faces fewer regulatory burdens.
I discuss this here: https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods#Brain_emulation
You can see my comparisons of different methods in the tables at the top of that post.