This post gives a fairly short proof, and my main takeaway is that intelligence and consciousness converge toward infinitely complicated look-up tables, so as to handle every possible situation:
https://www.lesswrong.com/posts/2LvMxknC8g9Aq3S5j/ldt-and-everything-else-can-be-irrational
I agree with this implication for optimization:
https://www.lesswrong.com/posts/yTvBSFrXhZfL8vr5a/worst-case-thinking-in-ai-alignment#N3avtTM3ESH4KHmfN