Yes, exactly. For what it’s worth, what you’re getting at in this post is roughly why I wrote Fundamental Uncertainty (or am still writing it, since the final version is still under revision). In it I try to argue that epistemic uncertainty matters a lot, is pervasive and unavoidable, and therefore causes problems when you try to build aligned AI. The book itself doesn’t spend much time on AI, but I wrote it because, while working on AI alignment, I saw how much this issue mattered, and I set out to convince others of its importance. My hope is that once the book is published I’ll have time to focus more on the AI side of things, using the book as a reference for loading up the worldview in which uncertainty is foundational (which seems surprisingly hard to do, for a bunch of reasons).