I mean, I was looming a fictional dialogue between me and Yudkowsky, and it had my character casually bring up being the author of “Soft Optimization Makes The Value Target Bigger”, which would imply that the model recognizes my thought patterns as similar in vibe to that document.
When you give it text you’ve written? What knowledge do you give it to reach that conclusion?
I am using LessWrong shortform like Twitter; it really shouldn’t be taken that seriously.