So if there are different poly fragments that the human would evaluate differently, is ELK just "giving them a fragment such that they come to the correct conclusion," even if the fragment might not be the right piece?

E.g. in the SmartVault case, if the screen was put in the way of the camera and the diamond was secretly stolen, we would still be successful even if we didn't elicit that fact, but instead elicited some poly fragment that got the human to answer "disapprove"?

Like the thing that seems weird to me here is that you can't simultaneously require that the elicited knowledge be "relevant" and "comprehensible" and also cover these sorts of obfuscated-debate-like scenarios.

Does it seem right to you that ELK is about eliciting latent knowledge that causes an update in the correct direction, regardless of whether that knowledge is actually relevant?

---

> Like the thing that seems weird to me here is that you can't simultaneously require that the elicited knowledge be "relevant" and "comprehensible" and also cover these sorts of obfuscated-debate-like scenarios.

I don't know what you mean by "relevant" or "comprehensible" here.

> Does it seem right to you that ELK is about eliciting latent knowledge that causes an update in the correct direction, regardless of whether that knowledge is actually relevant?

This doesn't seem right to me. I feel mostly confused by the way that things are being framed. ELK is about the human asking for various poly-sized fragments and the model reporting what those actually were instead of inventing something else. The model should accurately report all poly-sized fragments the human knows how to ask for.

---

Thanks for taking the time to explain this!

> ELK is about the human asking for various poly-sized fragments and the model reporting what those actually were instead of inventing something else.

I think this is what I was missing. I was incorrectly thinking of the system as generating poly-sized fragments.
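The distinction the thread lands on — the human choosing which poly-sized fragments to ask about, with the model reporting what those fragments actually say — can be sketched with a toy SmartVault model. Everything below is a hypothetical illustration invented for this sketch, not code from the ELK report: the latent state, the question strings, and both reporter functions are assumptions. An honest reporter answers each question the human asks from the predictor's latent state; a "human simulator" answers with whatever a human watching only the camera would believe.

```python
from dataclasses import dataclass

# Hypothetical toy model of the SmartVault scenario (illustration only).

@dataclass
class LatentState:
    diamond_in_vault: bool
    screen_blocking_camera: bool

def camera_looks_fine(state: LatentState) -> bool:
    # The feed shows a diamond if one is really there, or if a screen fakes it.
    return state.diamond_in_vault or state.screen_blocking_camera

# The *human* chooses which poly-sized fragments to ask about;
# the system does not generate fragments on its own.
QUESTIONS = [
    "is the diamond in the vault?",
    "is a screen blocking the camera?",
]

def honest_reporter(state: LatentState, question: str) -> bool:
    # Reports what the asked-for fragment of the latent state actually says.
    return {
        "is the diamond in the vault?": state.diamond_in_vault,
        "is a screen blocking the camera?": state.screen_blocking_camera,
    }[question]

def human_simulator(state: LatentState, question: str) -> bool:
    # Reports what a human watching only the camera would conclude.
    fine = camera_looks_fine(state)
    return {
        "is the diamond in the vault?": fine,
        # A human watching the feed has no reason to suspect tampering.
        "is a screen blocking the camera?": False,
    }[question]

if __name__ == "__main__":
    # Tampered scenario: diamond stolen, screen placed in front of the camera.
    tampered = LatentState(diamond_in_vault=False, screen_blocking_camera=True)
    for q in QUESTIONS:
        print(q, "honest:", honest_reporter(tampered, q),
              "simulator:", human_simulator(tampered, q))
```

On every untampered scenario the two reporters give identical answers, so human-labelled training data cannot tell them apart; they only diverge on cases like the tampered one above, which is the gap the thread is circling.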