An AGI is an extremely complex entity; you don't get to decide arbitrarily how to build it. If nothing else, there are fundamental computational limits on Bayesian inference that are not yet well understood. So if you were planning to make your FAI a Bayesian, you should probably be at least somewhat familiar with these issues, and working toward their resolution will help you better understand your constraints. I personally strongly suspect there are also fundamental computational limits on utility maximization, so if you were planning on making your FAI a utility maximizer, this is likewise worth studying. Maybe you don't consider this AGI research, but the main approach to AGI that I consider feasible would benefit at least somewhat from such an understanding.
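To make the "computational limits" point concrete: exact posterior computation by naive enumeration costs 2^n for n binary variables, and exact inference in general Bayesian networks is NP-hard (Cooper 1990), so any design that leans on exact Bayesian updating hits a wall quickly. Here is a minimal brute-force sketch illustrating the blowup; the function names and the toy joint distribution are mine, purely illustrative:

```python
import itertools

def brute_force_posterior(joint_prob, n_vars, evidence, query_var):
    """Compute P(query_var = 1 | evidence) by enumerating all 2^n assignments.

    joint_prob: function mapping a tuple of n binary values to its probability.
    evidence: dict {var_index: observed_value}.
    """
    num = 0.0    # mass of assignments consistent with evidence where query_var = 1
    denom = 0.0  # mass of all assignments consistent with evidence
    for assignment in itertools.product([0, 1], repeat=n_vars):  # 2^n iterations
        if any(assignment[i] != v for i, v in evidence.items()):
            continue
        p = joint_prob(assignment)
        denom += p
        if assignment[query_var] == 1:
            num += p
    return num / denom

# Toy joint over 3 independent fair coins (a real model would be structured):
p = lambda a: 0.5 ** 3
print(brute_force_posterior(p, 3, evidence={0: 1}, query_var=2))  # -> 0.5
```

The loop is exponential in n no matter how clever the bookkeeping; escaping it requires exploiting structure (e.g. conditional independence), and even then the general problem stays intractable. That intractability is exactly the sort of constraint an FAI designer has to engineer around rather than wish away.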
In my opinion, getting to provably friendly AI before someone else gets to AGI is hopeless. The best one can hope for is that (i) brain uploads come first, or (ii) we get a fairly transparent AGI design coupled with a good understanding of meta-ethics. So as far as I can see, if you want to reduce x-risk from UFAI, you should be doing one of the following:
1. working towards brain uploads to make sure they come first;
2. working on the statistical approach to AI to make sure it gets to AGI before the connectionist approach (and developing software tools that help us better understand the statistical algorithms we write);
3. working on something like lukeprog's program of metaethics (this is probably the best of the three).