I think that in ordinary usage, whatever sort of things humans have, that’s what we mean when we say ‘belief’, ‘goal’, etc. Insofar as anyone thinks those are crisp mathematical abstractions, that seems like a separate and additional claim. I worry that saying ‘humans don’t actually have beliefs’ makes it pretty unclear what ‘belief’ even means[1].
As James points out in another comment, the ‘quasi-’ framing is solely intended to set aside questions about whether LLM beliefs (etc) are ‘real’ beliefs and whether they’re fundamentally the same as human beliefs, not to take a stance that they’re not. Chalmers: ‘Quasi-interpretivism does not say anything about whether LLMs have beliefs and desires’. There are a lot of interesting and safety-relevant discussions to be had about what LLMs believe in a practical sense (eg ‘Does this model believe that Paris is in France or Germany?’), and I see this terminology as basically just a way to prevent such discussions from being counterproductively derailed by questions about whether a model can actually believe anything at all.
Maybe it’s suggesting a highly deflationary stance, in the same way that illusionists think humans aren’t actually conscious? But consciousness is a highly abstract and contested topic, whereas there’s a pretty ordinary and uncontested sense in which humans believe things, have desires, etc.
Seems worthwhile as a way to simplify conversations with people who seem to be confused, but I think this isn’t a reality-mapping exercise, and it probably makes it harder to see the structure of reality, which is kinda sad even if it’s useful for talking with some people?