Obviously, the legibility of problems is not a problem only in AI safety. For example, if I were working on aging full-time, I think most of the alpha would be in figuring out which root cause of aging is in fact correct, and in devising the easiest experiments (like immortal yeast) to make this legible to all the other biologists as fast as possible. I wouldn’t try to come up with a drug at all.