Assume that a being B with human-level intelligence takes on an arbitrary belief set (“worldmodel”) that is not Mormonism, and that this being has unlimited time in which to experiment and test its beliefs while in the observable universe (i.e. in a region causally closed with respect to what some human or clippy can observe).
Assume B changes its worldmodel in response to experimentation so as to fit all past observations, while changing it as little as possible. Assume further that B seeks out observations most likely to change its worldmodel.
Will B eventually contain a permanent Mormon worldmodel?
(Note: this is just the expanded version of the question, “Is Mormonism correct reasoning?”)
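(The belief-revision procedure this scenario describes — update the worldmodel to fit all past observations while changing it as little as possible, and seek out the observations most likely to change it — can be sketched as a toy Bayesian agent that conditions on evidence and ranks experiments by expected information gain. This is purely illustrative; the hypotheses, priors, and coin-flip setting below are assumptions for the sketch, not part of the scenario.)

```python
import math

# Toy sketch: an agent holds a posterior over hypotheses (its "worldmodel"),
# updates it on each observation so it fits all past data, and can score a
# prospective experiment by expected information gain — a stand-in for
# "seeks out observations most likely to change its worldmodel".

def normalize(p):
    s = sum(p.values())
    return {h: v / s for h, v in p.items()}

def update(posterior, likelihood, outcome):
    # Bayes' rule: P(h | outcome) is proportional to P(outcome | h) * P(h).
    return normalize({h: p * likelihood(h, outcome) for h, p in posterior.items()})

def entropy(p):
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def expected_info_gain(posterior, likelihood, outcomes):
    # Expected reduction in posterior entropy from observing one outcome.
    gain = entropy(posterior)
    for o in outcomes:
        p_o = sum(p * likelihood(h, o) for h, p in posterior.items())
        if p_o > 0:
            gain -= p_o * entropy(update(posterior, likelihood, o))
    return gain

# Illustrative hypotheses: three possible biases of a coin.
hypotheses = [0.1, 0.5, 0.9]
posterior = normalize({h: 1.0 for h in hypotheses})
likelihood = lambda h, o: h if o == "heads" else 1 - h

# After repeated "heads" observations, the posterior concentrates on 0.9:
# whatever the starting worldmodel, the evidence drives convergence.
for _ in range(10):
    posterior = update(posterior, likelihood, "heads")
best = max(posterior, key=posterior.get)
```

Under these assumptions, the agent's answer to the original question reduces to whether the evidence available in the observable universe would drive such a posterior toward a Mormon worldmodel and keep it there.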
Yes.
What is your substantiation?
If I had presented the initial scenario, it would have been to find out whether calcsam would remain a Mormon after contemplating it. My guess is that your motives were similar.
However, your follow-up seems to collapse “Is Mormonism correct reasoning?” into a single question; I think it is better to split the question into parts, as others have done in this thread.
That is true. However, my question had two purposes:
1) To determine if and how Mormonism is correct reasoning (and so how an arbitrary belief set would converge on it)
2) Failing 1), to determine whether querying User:calcsam could efficiently lead to answers to 1).
A human interested in providing informative evidence for 1), and who believed it to be true, would offer substantiation beyond a bare affirmative. Therefore, while User:calcsam technically answered the question I posed by saying “yes”, that answer is uninformative with respect to 1). Even so, I achieved a main objective in posing the question: determining whether this thread and User:calcsam are a viable method of learning significant information about important aspects of reality. I now infer that, with high probability, they are not.