I would like to see a few examples of the different types of mistakes that have ended up in real laws, and what you think we would gain by doing this.
I honestly don’t know enough about law to provide the kind of detailed mistakes you’re looking for. My belief that it is a somewhat ‘important’ problem is circumstantial, but I think there’s definite gain to be had:
1) It is often said that bad law consistently applied is better than good law inconsistently applied, but all other things being equal, good law is better than bad law. The fact that we can meaningfully distinguish the two at all is evidence that it’s at least possible to have good law and bad law.
2) Law is currently pretty ambiguous, at least compared to software. These ambiguities are typically resolved at run time, by the court system. If we can resolve some of them earlier with automated checks, it may be possible to reduce the run-time overhead of court cases.
3) Law is written in an internally inconsistent language. The words are natural-language words, and do not have well-understood, well-defined meanings in all cases. A checker could plausibly identify and construct a dictionary of the most consistently used words and definitions, and perhaps encourage lawmakers to use better words, define undefined ones, or clarify the meaning of questionable passages (see the sketch after this list). By reducing even a subset of words to well-defined, consistent definitions, the law may become easier to read, understand and apply.
4) An automated system could possibly reduce the body of law in general by eliminating redundancy, overlapping logic, and obsolete/unreferenced sections.
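To make point 3 concrete, here is a minimal sketch of what such a checker might look like, in Python. Everything here is hypothetical: the vague-word list, the quoting convention for definitions, and the function names are all invented for illustration, not taken from any existing tool.

```python
import re

# Hypothetical list of words a checker might flag as under-defined.
# A real tool would build this from statutes that define their terms.
VAGUE_TERMS = {"reasonable", "promptly", "substantial", "appropriate"}

def find_defined_terms(text):
    """Collect terms the text defines explicitly, e.g. '"reasonable" means ...'."""
    return {m.group(1).lower() for m in re.finditer(r'"([^"]+)" means', text)}

def check_section(text):
    """Return vague words the text uses but never defines."""
    defined = find_defined_terms(text)
    used = {w.lower() for w in re.findall(r"[A-Za-z]+", text)}
    return sorted((used & VAGUE_TERMS) - defined)

section = ('The operator shall respond promptly and take reasonable care. '
           '"Reasonable" means the care a prudent person would take.')
print(check_section(section))  # -> ['promptly']; "reasonable" is defined, so not flagged
```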
Currently, we do all of the above anyway, but we use humans and human brains to do it, and we allow for human error by building in huge amounts of redundancy and failsafes. The idea that we could liberate even some of those minds to work on harder problems is appealing to me.
What if we did this: if a program can detect “natural language words” and encourage humans to rewrite until the language is very, very clear, then that could open up the process of lawmaking to the other processing tasks you’re describing, without anyone having to write natural language processing software.
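A toy version of that rewrite loop, reusing the hypothetical check_section from the sketch above; the prompt wording and loop structure are my invention, not any existing editor’s behaviour:

```python
def rewrite_loop(text):
    """Keep asking the author to reword until nothing is flagged (toy sketch)."""
    while True:
        flagged = check_section(text)  # from the earlier sketch
        if not flagged:
            return text  # clear enough for downstream machine processing
        print("Ambiguous terms, please reword:", ", ".join(flagged))
        text = input("> ")  # the human supplies a clearer version
```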
It would also be useful to other fields where computer-processed language would be beneficial. THOSE fields could translate their natural language into language that computers can understand, then process it with a computer.
And if, during the course of using the software, it is given access to both the “before” text (what it has marked as “natural language, please reword”) AND the “after” text (the precise, machine-readable language the human has changed it to), then one would have the opportunity to use those changes as a growing dictionary, from which it translates natural language into machine-readable language on its own.
At which point, it would be capable of natural language processing.
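A minimal sketch of that feedback step, under the (very strong) simplifying assumption that exact phrase substitution counts as “translation”; the names here are mine:

```python
# Hypothetical store of human rewrites harvested from the editor.
translations = {}

def record_rewrite(before, after):
    """Remember how a human clarified a flagged phrase."""
    translations[before] = after

def auto_translate(text):
    """Apply every remembered rewrite; a crude stand-in for real NLP."""
    for before, after in translations.items():
        text = text.replace(before, after)
    return text

record_rewrite("reasonable care", "the care a prudent person would take")
print(auto_translate("The operator shall take reasonable care."))
# -> The operator shall take the care a prudent person would take.
```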
I bet there are already projects like this out there. I know of a few AI projects that use input from humans to improve the AI, like Microsoft’s Milo (ted.com has a TED Talk video on this), but I don’t know if any of them are doing this translation of natural language into machine-readable language, and back.
Anyway, we seem to have solved the problem of how to get the software to interpret natural language. Here’s the million-dollar question:
Would it work, business-wise, to begin with a piece of software that acts as a text editor, highlights ambiguities, and anonymously returns the before-and-after text to a central database?
If yes, all the rest of this stuff is possible. If no, or if some patent hoarder has taken that idea, then … back to figuring stuff out. (:
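For what it’s worth, the anonymous round-trip in that question could be very thin. A sketch using only the Python standard library; the endpoint is made up, since no such service exists:

```python
import json
import urllib.request

ENDPOINT = "https://example.com/rewrites"  # hypothetical central database

def submit_rewrite(before, after):
    """Anonymously report a before/after pair (no user identifiers sent)."""
    payload = json.dumps({"before": before, "after": after}).encode()
    req = urllib.request.Request(ENDPOINT, data=payload,
                                 headers={"Content-Type": "application/json"},
                                 method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status
```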
An idea from a book called The Death of Common Sense—language has very narrow bandwidth compared to the world, which means that laws can never cover all the situations that the laws are intended to cover.
This is the story of human law.