Interesting post! I think the heavier weight of octopuses is partly down to the narrower range of models you tested: the 30% figure partly came from averaging over a wider range of models, and individual models had stronger preferences.
I also suspect there's a difference in the system prompt between API and chat usage (in that I imagine there is none for the API). That would be my main guess for why you got significantly more corvids; I've seen both this and the increased octopus frequency when doing small tests in chat.
On the actual topic of your post, my guess is that the conclusion is that AI's metacognitive capabilities are situation-dependent. The question then becomes in which situations it can and can't reason about its own thought process.