AI Alignment and Recognition
Let’s suppose we succeed in aligning a super-intelligence. We should expect that it would be able to give a pretty good estimate of how impactful various people’s actions were, including contributions that went unnoticed at the time. So maybe there are some people toiling away on AI Safety who feel sad that their efforts aren’t being recognised. I guess what I’m saying is that if we succeed, you will be. I’m hoping that at least some people will find this encouraging.