My current best guess for how to formalize frames is as a combination of Garrabrant inductors, RIAs, and some kind of voting system: traders would earn wealth from a combination of making predictions and directly influencing decisions, and would then spend that wealth to determine which values should be used as evaluation criteria for those decisions.
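To make the mechanism concrete, here is a minimal toy sketch of the wealth-and-voting loop described above. This is my own illustration, not an actual Garrabrant-inductor implementation: the `Trader` structure, the linear scoring rule, and the wealth-weighted vote tally are all simplifying assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Trader:
    """Toy trader: earns wealth from prediction accuracy, spends it voting on values."""
    name: str
    wealth: float = 1.0
    votes: dict = field(default_factory=dict)  # value -> fraction of wealth committed

def score_prediction(p: float, outcome: bool) -> float:
    """Reward in [0, 1] for a probabilistic prediction (linear scoring, for simplicity)."""
    return p if outcome else 1.0 - p

def settle_round(traders, predictions, outcome):
    """Pay each trader according to how well its prediction matched the outcome."""
    for t in traders:
        t.wealth += score_prediction(predictions[t.name], outcome)

def elect_value(traders):
    """Pick the evaluation criterion with the most wealth-weighted support."""
    tally = {}
    for t in traders:
        for value, frac in t.votes.items():
            tally[value] = tally.get(value, 0.0) + t.wealth * frac
    return max(tally, key=tally.get)

# Usage: two traders predict whether a plan succeeds, then vote on values.
a = Trader("a", votes={"honesty": 1.0})
b = Trader("b", votes={"profit": 1.0})
settle_round([a, b], {"a": 0.9, "b": 0.2}, outcome=True)
print(elect_value([a, b]))  # -> honesty: the better predictor's vote carries more weight
```

The key design point is the coupling: predictive accuracy feeds wealth, and wealth (not head-count) decides which values get used, so influence over evaluation criteria accrues to whatever has been tracking reality well.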
My two cents: I think the phenomena you’re emphasizing are roughly-accurately characterized in terms of uncertainty over binding: i.e. uncertainty over the preconditions under which various Bayesian submodels apply, and how to map the “variables” in those submodels to stuff-in-the-world. Examples:
“Cats are carnivorous” → … usually binds to a given cat in given circumstances, but we’re not confident that it binds to all cats in all circumstances.
“Bob is tall” → … assuming you’re looking at the right person, but you’re not confident that “Bob” in this model binds to the person you’re looking at. (I think the part about “tall” being relative is a separate phenomenon from the main thing you’re gesturing at.)
“America is beautiful” → (I think this is a qualitatively different thing which is getting conflated with the main category of stuff you’re gesturing at.)
“When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. …” → The flat-model does make accurate predictions locally, i.e. over small enough chunks of the Earth’s surface, so it’s useful insofar as we restrict its binding to that range. The sphere model binds accurately over a much wider range.
“two models which both make accurate predictions, but where the questions they predict don’t fully overlap” → The two models bind to different real-world situations.
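One way to make the binding idea concrete is a toy sketch along these lines (the `Submodel` structure and its fields are my own illustration, not a formalism from the comment): a submodel carries both a prediction rule and an explicit predicate stating the preconditions under which it applies.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Submodel:
    """A predictive submodel plus an explicit binding: a predicate saying where it applies."""
    name: str
    binds: Callable[[dict], bool]     # preconditions: does this situation fit the model?
    predict: Callable[[dict], float]  # prediction made *given* that it binds

# The flat model binds only over small patches; the sphere model binds much more widely.
flat = Submodel(
    "flat-earth",
    binds=lambda s: s["extent_km"] < 100,      # restrict binding to local chunks
    predict=lambda s: 0.0,                     # predicted curvature: none
)
sphere = Submodel(
    "sphere-earth",
    binds=lambda s: True,                      # binds (nearly) everywhere
    predict=lambda s: s["extent_km"] / 6371.0, # curvature grows with extent
)

def applicable(models, situation):
    """Return the submodels whose binding preconditions hold in this situation."""
    return [m for m in models if m.binds(situation)]

print([m.name for m in applicable([flat, sphere], {"extent_km": 10})])    # both bind
print([m.name for m in applicable([flat, sphere], {"extent_km": 5000})])  # only sphere
```

On this picture, "uncertainty over binding" would be uncertainty about the `binds` predicate itself: you know what the model says once it applies, but not exactly where it applies or which real-world entity each variable maps to.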
Just a quick response: I don’t really understand what you mean by “binding”, got a link to where you discuss it more? But just going off my intuitive reading of it: the “binding” thing feels like a kinda Kripkean way of thinking about reference—something either binds or it doesn’t—which I am not a fan of. I think in more Russellian terms: we have some concept, which has some properties, and there’s a pretty continuous spectrum over how well those properties describe some entity in the world.
For example, when I’m uncertain about “cats are carnivorous”, it’s not that I have a clear predicate “carnivorous” and I’m uncertain which cats it “binds” to. Rather, I think that probably almost all cats are kinda carnivorous, e.g. maybe they strongly prefer meat, and if they never get meat they’ll eventually end up really unhealthy, but maybe they’d still survive, idk...
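The contrast with binary binding can be sketched as follows (again a toy illustration with made-up property names and numbers): instead of a predicate that either binds or doesn't, score how well a concept's bundle of properties describes an entity, yielding a continuous degree of fit.

```python
def degree_of_fit(properties: dict, entity: dict) -> float:
    """Average per-property match in [0, 1] between a concept and an entity."""
    scores = [1.0 - abs(properties[k] - entity.get(k, 0.0)) for k in properties]
    return sum(scores) / len(scores)

# "Carnivorous" as a bundle of graded properties rather than a sharp predicate.
carnivorous = {"prefers_meat": 1.0, "needs_meat_to_thrive": 0.9}
typical_cat = {"prefers_meat": 0.95, "needs_meat_to_thrive": 0.7}
odd_cat = {"prefers_meat": 0.4, "needs_meat_to_thrive": 0.3}

print(degree_of_fit(carnivorous, typical_cat))  # high fit: "kinda carnivorous"
print(degree_of_fit(carnivorous, odd_cat))      # lower fit, but not a hard zero
```

The point of the sketch is that nothing forces a threshold: "almost all cats are kinda carnivorous" becomes a distribution over fit scores rather than uncertainty about which cats a sharp predicate binds to.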