What I’m seeing is that rational agreement software would require some objective method for marking logical fallacies, which the logic-checking AI would obviously help with. I’m not sure why the rationalist agreement database would help with creating the logic-checking AI, though, unless you mean it could act as a sort of “wizard”: you go through your document with it one piece at a time and have a sort of “chat” about what the rationalist agreement database contains, fed to you in small, carefully selected bits.
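If it helps, here is a minimal sketch of what that “wizard” loop might look like, assuming the database is just a list of text entries and using naive word-overlap as a stand-in for whatever retrieval the real AI would use. All names (`wizard`, `select_bits`) and the scoring method are made up for illustration:

```python
def _words(text):
    """Lowercased word set, used for crude relevance scoring."""
    return set(text.lower().split())

def select_bits(chunk, database, k=2):
    """Return the k database entries sharing the most words with the chunk.
    (A real system would use a proper retriever; this is a placeholder.)"""
    scored = sorted(database,
                    key=lambda entry: len(_words(entry) & _words(chunk)),
                    reverse=True)
    return scored[:k]

def wizard(document, database, chunk_size=30):
    """Walk the document one chunk at a time, yielding each chunk paired
    with the carefully selected bits of the database relevant to it."""
    words = document.split()
    for i in range(0, len(words), chunk_size):
        chunk = " ".join(words[i:i + chunk_size])
        yield chunk, select_bits(chunk, database)
```

The point of the sketch is just the shape of the interaction: chunk, retrieve a few relevant bits, discuss, repeat — the actual fallacy-marking logic would live in whatever model sits on top of this loop.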