I admitted that it’s possible the problem is practically unsolvable, or worse: you could have put the entire world on Russell and Whitehead’s goal of systematizing math, and you might have gotten to Gödel faster, but more likely you’d just have wasted more time.
And on Scott’s contributions, I think they solve, or contribute toward solving, parts of the problems that were initially posited as critical to alignment, and I haven’t seen anyone do more. (With the possible exception of Paul Christiano, who hasn’t been focusing on research aimed at solving alignment as much recently.) I agree that the work doesn’t do much other than establish better foundations, but that’s kind of the point. (And it’s not just Logical Induction: there’s also his collaboration on Embedded Agency, and his work on finite factored sets.) But objecting that this foundational work is more philosophical and doesn’t itself align AGI seems like moving the goalposts, even if I agree it’s true.