Idea: Rational Agreement Software
I think this would be the most useful, even if it were only partially completed, since even a partial database would help greatly both with finding previously unrecognized biases and with the logic-checking AI. It may even make the latter possible without the natural-language understanding that Nancy thinks would likely be needed for it.
What I'm seeing is that rational agreement software would require some kind of objective method for marking logical fallacies, which the logic-checking AI would obviously help with. I'm not sure why the rationalist agreement database would help with creating the logic-checking AI, unless you mean it could act as a sort of "wizard": you go through your document with it one piece at a time and have a sort of "chat" with it about what the rationalist agreement database contains, fed to you in small, carefully selected pieces.
I like the Rational Agreement Software project, which I’d consider improved collaboration software. That’s a good project. That’s an important project. That’s the fastest way to superhuman AI—combining the talents of multiple humans with technology. That’s probably the fastest way to solve all our problems—create an infrastructure to better harness our cognitive surplus.
You seemed to focus on creating agreement. I think we'd be doing pretty well just to speed up the cycle time for improving our arguments and communicating them accurately. Get a bunch of people together, get them all typing, get them providing feedback, and iterate in a way that keeps track of the history but leaves an improved, concise summary at each iteration.
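The iterate-with-history idea above could be sketched as a tiny data structure. A minimal sketch; the class and method names here are my own invention, not part of any proposal in the thread:

```python
class Draft:
    """Toy sketch of an argument that keeps its full revision
    history while exposing only the latest concise summary."""

    def __init__(self, text: str):
        # Every version is retained, so nothing is ever lost.
        self.history = [text]

    def revise(self, new_text: str) -> None:
        """Record a new iteration after a round of feedback."""
        self.history.append(new_text)

    @property
    def current(self) -> str:
        """The improved, concise summary left by the latest iteration."""
        return self.history[-1]


d = Draft("Initial rough argument.")
d.revise("Tighter argument after feedback.")
print(d.current)       # Tighter argument after feedback.
print(len(d.history))  # 2
```

A real system would add authorship, branching, and per-revision feedback, but the core invariant is the same: readers see only the current summary, while the full history stays available.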
Sorting opinion and fact with code:
When a statement is incorrect, it will tend to follow a certain pattern. Change out the subject words and you get the same pattern. For instance, hasty generalization:
All bleeps are bloops.
All gerbils are purple.
All Asians are smart.
These are all examples of false reasoning; the falseness is inherent to the sentence structure, so if we swap "bleeps" and "bloops" for any other subject, it is still a hasty generalization. If we built a piece of software that let users flag a statement for review, the reviewer could be shown the statement with different subject words. For instance, if someone argues an obviously bad piece of reasoning like "All black people are bad.", the reviewer might be shown "All oranges are bad." Without race to potentially trigger the reviewer's bias, the reviewer can plainly see that the sentence is a hasty generalization. This would help prevent bias and politics from interfering with the rational review of statements.
If that’s not found to be good enough alone, we could use it as part of a larger strategy.
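The subject-substitution step could be prototyped roughly as follows. This is a deliberately naive sketch: the regex pattern, the replacement word lists, and the function name are all illustrative assumptions, and real statements would need far more robust parsing than a single template.

```python
import re

# Hypothetical neutral replacement words, chosen to strip any
# bias-triggering content while preserving the logical form.
NEUTRAL_SUBJECT = "oranges"
NEUTRAL_PREDICATE = "bad"


def neutralize(statement: str) -> str:
    """Rewrite an 'All X are Y.' statement with neutral subject words,
    so a reviewer judges only its logical form."""
    match = re.fullmatch(r"All (.+) are (.+)\.", statement.strip())
    if match is None:
        # Pattern not recognized; pass the statement through unchanged.
        return statement
    return f"All {NEUTRAL_SUBJECT} are {NEUTRAL_PREDICATE}."


print(neutralize("All black people are bad."))  # -> All oranges are bad.
```

Even this toy version shows the approach's limits: it matches form only, which is exactly the objection raised in the replies below about true statements of the same shape.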
Every square is a rhombus.
If the difference between a hasty generalization and a fact is that the fact is true, then to call something a hasty generalization we need to say something about its factualness.
Not all claims of the form “all Xs are Ys” are false, and neither is every conclusion of the form “all Xs are Ys” a product of bad reasoning. Suppose your software were to replace “All electrons are negatively charged” with “All rabbits are highly educated”. How is the reviewer supposed to react? Is she supposed to conclude that the original statement is false? Why?
You are using the phrase “hasty generalization” in a highly non-standard way here. Philosophers classify hasty generalization as an informal fallacy. The “informal” means that the content matters, not just the formal structure of the argument. Also, a hasty generalization is an argument, not a single sentence. An example of a hasty generalization would be, “I was cut off by an Inuit driver on the way to work today. All Inuits must be terrible drivers.” The fallacy is that the evidence being appealed to (being cut off by one Inuit driver) involves much too small a sample for a reliable generalization to an entire population. But just looking at the formal structure of the argument isn’t going to tell you this.
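Because the fallacy lives in the content, a checker for it would have to reason about the evidence itself, for example how large the sample is relative to the population being generalized about. A deliberately crude sketch; the function name is invented and the threshold is arbitrary and purely illustrative:

```python
def is_hasty(sample_size: int, population_size: int,
             threshold: float = 0.01) -> bool:
    """Flag a generalization as hasty when the evidence covers too
    small a fraction of the population. This is a judgment about
    content (how much evidence?), not about sentence structure."""
    return sample_size / population_size < threshold


# One Inuit driver observed, generalized to an entire population:
print(is_hasty(sample_size=1, population_size=150_000))  # True
```

No inspection of the sentence "All Inuits must be terrible drivers" could supply the sample and population figures; that information has to come from outside the argument's formal structure.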
There are formal fallacies, where the fallacy in the reasoning is subject-independent. An example would be “affirming the consequent”—arguing that since B is true and A implies B, A must also be true. You could build the kind of software you envisage for formal fallacies, but you’d need another strategy for catching and dealing with informal fallacies.
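By contrast, a formal fallacy like affirming the consequent really can be detected from structure alone, since the subject terms never matter. A minimal sketch with invented names, assuming the argument has already been parsed into an implication plus an observed and a concluded proposition:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Implication:
    antecedent: str  # A in "A implies B"
    consequent: str  # B in "A implies B"


def affirms_the_consequent(premise: Implication,
                           observed: str, concluded: str) -> bool:
    """True when an argument has the invalid form:
    'A implies B; B is true; therefore A'.
    The check never looks at what A and B actually say."""
    return observed == premise.consequent and concluded == premise.antecedent


# "If it rained, the grass is wet. The grass is wet. Therefore it rained."
arg = Implication(antecedent="it rained", consequent="the grass is wet")
print(affirms_the_consequent(arg, observed="the grass is wet",
                             concluded="it rained"))  # True
```

The hard part in practice would be the parsing step that turns natural-language prose into these structured propositions, which is where the thread's earlier point about natural-language understanding comes back in.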