I saw this message without context in my mailbox and thought to reply that this was an unsolved problem[1], that things that simply are not true can't stand up very well in a world model. But this seems like something an intelligent human like Amodei or Musk should be able to do. A 99% "probability" (a guess by a human) on ¬ai_doom should not be able to fix enough detail to directly contradict reasoning about the counterlogical/counterfactual where doom instead happens. Any failure to carry out this reasoning task seems like a simple failure of reasoning in logic and EUM, not an encounter with a hard (unsolved) problem in decision theory or counterlogical reasoning.
At a human level of intelligence, the degree of trapped priors required to get yourself into an actual unsolved problem in the context of predicting future AI developments seems to be past the point where you would claim to have a good guess at the doom-causing AI's name, and well on the way to describing the Vingean reflection process of the antepenultimate ASI on priors alone.
[1] "PIBBSS Final Report: Logically Updateless Decision Making", footnote 12: https://www.lesswrong.com/posts/wXbSAKu2AcohaK2Gt/udt-shows-that-decision-theory-is-more-puzzling-than-ever?commentId=xdWttBZThtkyKj9Ts