I think the reason this works is that the AI doesn't need to deeply understand in order to make a nice summary. It can just put some words together, and my high context with the world will make the necessary connections and interpretations, even if further questioning the AI would lead it to wrong interpretations. For example, it's efficient at summarizing decision theory papers, even though it's generally bad at reasoning through them.
hahah yes we had ground truth