A further wrinkle (and another example) is that, during the design process, a question like "what should I think about (in particular, what should I gather information about, or update on)?" wants these predictions.
Yes; this (or something similar) is why I suspect that “‘believing in’ atoms” may involve the same cognitive structure as “‘believing in’ this bakery I am helping to create” or “‘believing in’ honesty” (and a different cognitive structure, at least for ideal minds, from predictions about outside events). The question of whether to “believe in” atoms can be a question of whether to invest in building out and maintaining/tuning an ontology that includes atoms.
(FYI, I initially failed to parse this because I interpreted “‘believing in’ atoms” as something like “atoms of ‘believing in’”, presumably because the idea of “believing in” I got from your post was not something that you typically apply to atoms.)