I am very inexperienced in epistemology, so forgive me if I’m making a simple error.
But it sounds like everything important in your theory is stuck into a black box in the words “criticize the idea”.
Suppose we had a computer program designed to print “I like this idea” in response to any idea represented as a string containing exactly five instances of the letter ‘e’, and “I dislike this idea because it has the wrong number of ‘e’s in it” in response to any other idea.
And suppose we had a second computer program designed to print “I like this idea” in response to any idea printed on blue paper, and “I dislike this idea because it is on the wrong color paper” in response to any idea printed on any other color of paper.
These two computers could run through your decision-making process of generating and criticizing ideas, and would eventually settle on the first idea generated that was written on blue paper and used the letter ‘e’ exactly five times.
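To make the thought experiment concrete, here is a minimal sketch of the two toy critics and the generate-and-criticize loop. All the names are hypothetical, invented just for this illustration; the point is only that the loop terminates on whatever survives criticism, regardless of whether the critics track anything truth-related:

```python
def letter_e_critic(idea_text: str) -> bool:
    """'Likes' an idea iff its text contains exactly five 'e's."""
    return idea_text.count("e") == 5

def blue_paper_critic(paper_color: str) -> bool:
    """'Likes' an idea iff it is printed on blue paper."""
    return paper_color == "blue"

def settle_on_idea(ideas):
    """Run the generate-and-criticize loop: return the first idea
    that survives criticism from both critics, or None."""
    for text, color in ideas:
        if letter_e_critic(text) and blue_paper_critic(color):
            return text, color
    return None

# The loop happily "settles" on a truth-irrelevant idea:
ideas = [
    ("the moon is made of cheese", "red"),   # five 'e's, but wrong paper
    ("everyone everywhere agrees", "blue"),  # blue paper, but nine 'e's
    ("geese see", "blue"),                   # survives both critics
]
print(settle_on_idea(ideas))  # → ('geese see', 'blue')
```

Nothing in the loop itself distinguishes these critics from good ones; everything depends on what the criticism procedure actually checks.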
So it would seem that for this process to capture what we mean by “truth”, you have to start out with some reasoners who already have a pretty good set of internal reasoning processes kind of like our own that they use when criticizing an idea.
But everything that’s interesting and difficult about epistemology is captured in that idea of “a pretty good set of internal reasoning processes kind of like our own that they use when criticizing an idea”, so really this decision-making process only works for entities that are already running a different epistemology that’s doing all the work.
It almost seems like a detached lever fallacy, where the lever is the ability to criticize ideas, and the machinery the lever is activating is the actual epistemology the agent is using.