OK, it seems like I misinterpreted your comment on philosophy. But in this post you seem to be saying that we might not need to solve philosophical problems related to epistemology and agency?
That concept also seems useful and different from autopoiesis as I understand it (since it requires continual human cognitive work to run, though not very much).
I think we can avoid having to come up with a good decision theory, good priors, and so on: there are particular reasons we might have had to solve those philosophical problems, and I think we can dodge them. But I agree that we need, or at least want, to solve some philosophical problems to align AGI (e.g. defining corrigibility precisely is a philosophical problem).
That makes sense.