RobBB, how did you make the diagrams, & how long did writing this post take?
With the help of an inductive algorithm that uses bridge hypotheses to relate sensory data to a continuous physical universe, we can avoid making our AIs Cartesians. This will make their epistemologies much more secure.
Is this whole post about a problem that only applies in odd cases, such as considering the possibility that someone is inserting bits into your brain, that real humans never need to consider? Does avoiding Cartesianism make everyday epistemology more secure, or is it needed only for the kind of epistemological certainty required for FAI? I suspect it is the latter, since most humans are Cartesians. It would help to have an example of how this is a problem for Alice in a real-world situation of the kind humans regularly experience.