I think you’ll find it useful regardless of how much it relates to MIRI’s program: epistemology is foundational, and a better understanding of it is wildly useful if you have an interest in anything that comes remotely close to touching philosophical questions. In fact, my own take on most existing AI safety research is that it doesn’t sufficiently address foundational questions of epistemology, instead making certain implicit, strong assumptions about the discoverability of truth. As a result, you can add a lot of value by more carefully questioning how we know what we think we know as it relates to solving AI safety problems.