Mathematical/logical truths are true in all possible worlds, so they never tell you which world you are in.
If you want to say something that is true in your particular world (but not necessarily in all worlds), you need observations to narrow down which world you are in.
I don’t know how closely this matches the use in the sequence, but I think a sensible distinction between logical and causal pinpointing is: All the math parts of a statement are “logically pinpointed” and all the observation parts are “causally pinpointed”.
So basically, I think that in theory you can reason about everything purely logically by using statements of the form “In subspace_of_worlds W: X”[1]; causal pinpointing is then only needed before making decisions, to evaluate which world you are actually likely in.
You could imagine programming a world model where the default assumption is that non-tautological statements are about the world we’re in; a sentence like “Peter’s laptop is silver” would then get translated into something like “In subspace_of_worlds W_main: color(<x s.t. laptop(x) AND own(Peter, x)>, silver)”.
Most of the statements you reason with are of course about the world you’re in, or about close cousin worlds with only a few modifications, though sometimes we also think about more distant fictional worlds (e.g. HPMoR).
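To make this a bit more concrete, here is a minimal Python sketch of what such world-tagging could look like (names like `Statement`, `assert_fact`, and `W_hpmor` are just illustrative, not a proper formalization):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Statement:
    world: str       # which subspace of worlds the claim is about
    predicate: str   # e.g. "color"
    args: tuple      # e.g. ("laptop owned by Peter", "silver")

# Default subspace: the world we take ourselves to be in.
W_MAIN = "W_main"

def assert_fact(predicate, args, world=W_MAIN):
    """Non-tautological statements default to W_main; a tautology
    could instead be tagged with the space of all possible worlds."""
    return Statement(world=world, predicate=predicate, args=args)

# "Peter's laptop is silver" becomes a claim scoped to W_main:
fact = assert_fact("color", ("laptop owned by Peter", "silver"))

# A fiction world (e.g. HPMoR) just gets a different world tag:
fiction = assert_fact("can_cast", ("Harry", "Patronus"), world="W_hpmor")
```

Evaluating which tagged statements bear on the world you’re actually in would then be the causal-pinpointing step before a decision.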
(Thanks to Kaarel Hänni for a useful conversation that led up to this.)
[1] It’s just a sketch, not a proper formalization. Maybe we’d rather want something like statements of the form “if {...lots of context facts that are true in our world...} then X”.