I’d rather your “that is” were a “for example”. This is because:
It’s also possible for the process of updates to fail to get arbitrarily close to any endpoint (under a notion of closeness that is imo appropriate for this context) without there being any single sentence on which one keeps changing one’s mind. If we think of one’s “ethical state of mind” as being given by the probabilities one assigns to some given countable collection of sentences, then here I’m saying that it can be reasonable to use a notion of convergence which is stronger than pointwise convergence.

For a mathematical example: suppose one just runs a naive proof search and assigns truth value 1 to proven sentences and 0 to disproven sentences. One could try to say this sequence of truth value assignments converges to the assignment that gives 1 to all provable sentences and 0 to all disprovable sentences (and, let’s say, whatever the initialization assigns to all independent sentences). But in our context of imagining some long reflection getting close to something in finite time, I think it’s more reasonable to say that one isn’t converging to anything in this example — it seems pretty intuitive to say that after any finite number of steps, one hasn’t really made much progress toward this kinda-endpoint: after all, one will have proved only finitely many things, and one still has infinitely many more things left to prove.

Bringing this a tad closer to ethical reality: we could perhaps imagine someone realizing, infinitely many times, that projects they hadn’t really considered before are worth working on, with what they are up to thus changing [by a lot] [infinitely many times].

To spell out the connection to the math example a bit more: the common point is that novelty can appear in which sentences/things are considered, so one can have novelty even if novelty doesn’t keep showing up in how one relates to any given sentence/thing. I say more about these themes here.
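To make the pointwise-vs-stronger-convergence point concrete, here is a toy sketch (the indexing scheme and the 0.5 initialization are my own illustrative assumptions, not anything from the setup above). Sentences are indexed 0, 1, 2, …, and at step n the proof search has settled the first n of them; each assignment converges pointwise to the all-1 limit, yet in the sup metric the distance to that limit never shrinks below 0.5:

```python
def assignment(n, k):
    """Credence that step-n proof search assigns to sentence k.

    Sentence k is 'proved' once k < n; before that it sits at the
    (assumed) initialization value of 0.5.
    """
    return 1.0 if k < n else 0.5

def limit(k):
    """The candidate pointwise limit: every sentence eventually gets credence 1."""
    return 1.0

# Pointwise convergence: for each fixed sentence k, assignment(n, k)
# equals limit(k) for all n > k.
for k in range(5):
    assert assignment(k + 1, k) == limit(k)

# But no uniform (sup-metric) convergence: at every step n there remain
# sentences still at 0.5, so the sup distance to the limit stays at 0.5.
# (We check over a finite horizon of sentences as a stand-in for infinity.)
def sup_distance(n, horizon=1000):
    return max(abs(assignment(n, k) - limit(k)) for k in range(horizon))

assert all(sup_distance(n) == 0.5 for n in range(100))
```

So under the pointwise notion one would say the search “converges”, while under the sup-metric notion — arguably the right one when asking whether finitely many steps get you close — it never makes progress at all.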
Thank you, that is a great point.