Yes, I’m actually rereading your excellent computational complexity textbook currently!
I have been thinking about the extent to which the agent foundations community's goal of understanding computational uncertainty (including Vingean reflection) is "hard" or even "complete" for much of computational complexity theory (perhaps also in the literal sense of containing complete problems for well-studied complexity classes), and is therefore perhaps far too ambitious to expect a "solution" before AGI. I wonder if you have thoughts on this.
One direction I've been exploring recently is a computationally unbounded theory of embedded agency, which avoids the need to talk about computational complexity at all; however, it may not capture the important problems of self-trust needed for alignment.
Anyway, there’s no rigorous empirical science without some kind of theory—I know you didn’t try to teach cryptography in a purely empirical way, and both subjects share an adversarial nature :)