The Truth About False
Suppose you’re building sandcastles on the beach. You build them closer to the shore, supposedly because the sand there is better, but it’s also riskier: right where the sand is ideal is where the tide tends to be most unpredictable. Nevertheless, you take your chances. Your castle being destroyed is a good excuse to build another one. Furthermore, it adds strategic elements to the construction: build it so that it maximizes the number of hits it can take. After all, that’s what a castle is, right?
Suppose a man walking by scoffs at you and condescendingly tells you that you are guaranteed to be hopelessly stuck in an endless cycle of building and rebuilding. He walks off before you can reply, but if he hadn’t, you know it would have ended in a faux pas anyway: he wasn’t prepared for you to say that you like it this way.
This man has made himself into your adversary far more successfully than the friend who has committed to either destroying your castle himself or building a better one than yours. Endearingly, your friend doesn’t at first understand why this is. Your friend promises you that he just wants you to lose, whatever the cost; that’s his only aim.
You point out to your friend that he is always there to be your nemesis, on equal grounds, which is technically far less adversarial: The man who scoffed insinuated that you were guaranteed to fail, and that you were also engaged in a frivolous and low-status endeavor.
If the castle is a theorem, and the castle-builder is a theorem-prover, then an un-built castle is the theorem before the proof has been constructed. A built castle is a proven theorem. The ocean and your friend are counter-factual theorem-prover-helpers.
The man who condescends to you that you are doomed to fail is a different theorem-prover, trying to prove the theorem that “Castle = False” or equivalently that “Castle-Constructor = False.” He’s working in an entirely different framework. One that is, among many other things, non-constructivist and non-intuitionist. These people might believe, for example, that there are certain types of numbers that we will never ever be able to compute, and yet, if we had those numbers, they would still be useful to us in some way.
I call these people, for lack of a better name (and also because it is intended to be used pejoratively), Falsers[1].
If someone tells you that “Castle = False,” or that whatever you’re working on is False, they are chiefly saying that you are mistaken about something. Whatever hypothesis you’re holding in your head, even if it is accompanied by the hypothesis that you’ll be able to prove either it or its negation, is said to be a wild goose chase. You’re said to be mistaken about whatever it is that is driving you to do whatever it is you’re doing.
Noam Chomsky is an example of a Falser. You’ll note that his article is called “The False Promise of ChatGPT.” This is a good example, because the major claim of the piece is that certain specific, positive claims about ChatGPT are false. It would be different if it instead said something like “false allegations,” which would be a negation of a negative claim.
There is a rigorous, technical distinction between positive and negative claims. Even though we can usually judge at a glance whether a sentence is positive or negative, the classification could also be established rigorously if it ever came down to debate.
The word “false” most often attaches to positive claims, in the form “X = False” where X is a positive claim. Less often, it is attached to negative claims, as in “‘X = False’ = False.” Strictly speaking, “false” and negation are not exactly the same thing, but “false” can be replaced with “not.”
Falser claims are easy to spot: they generate heated debate and are used to criticize the work of others. They are also easy to disprove. It is far, far harder to disprove a positive claim. It is extremely easy to prove that “this idea works” by getting the idea to work. Nearly all of the work we do that increases the output of society amounts to proving that positive claims are true.
If you want to prove a negative claim, you have to convince everyone undertaking the corresponding positive claim to give up. Such a proof is brittle, because anyone who tries again, even after everyone else has stopped, brings the claim back into the realm of possibility.
Being the counter-factual theorem-prover-helper is a completely different kind of job. He is homing in on specific, key facets of the proof and saying, “this wall might not hold up very well without a moat to drain off as much water as possible.” That isn’t a “false,” nor is it a claim that the Castle or the Castle-Constructor is false. He’s part of the Castle-Constructor. He advised building a moat!
Saying “Castle = False” means that the particular wall in question, like any wall, would fall to the waves even if a moat were placed there.
Your counter-factual theorem-prover-helper is an anxious person, and does indeed worry that perhaps Castle = False. But the only way to know for sure is to try building a moat near that wall.
He’s homing in on something deeper and more specific than “false” can capture. He’s looking at the nearest, most relevant sub-object to Castle that could shift Castle away from falsehood, if possible.
This is a bit like computing the gradient. When AI models are wrong, they aren’t just “wrong,” period. They don’t propagate the literal “false” throughout their parameter vectors; nowhere is a “false” used in the computation. “False” is first transformed into a 0 or a 1, and then a distance between the prediction and that 0 or 1 is computed. The gradient of that distance gets propagated backward to all the parameters, which are then shifted by some amount, according to the degree of wrongness. “Degree of wrongness” is a very powerful and very dramatic shift away from “false” and the law of the excluded middle.
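A minimal sketch of what I mean, in plain Python with made-up numbers (an illustration of the general scheme, not the code of any particular model): the label “false” becomes a 0, a distance from the prediction is computed, and the parameter gets nudged in proportion to that distance.

```python
import math

# Toy one-parameter logistic model predicting the probability that a label
# is "true". The label "false" never enters the update as a bare verdict;
# it is encoded as the numeric target 0.0.
w = 0.3                    # arbitrary starting parameter
x = 1.0                    # arbitrary input feature
target = 0.0               # "false", transformed into a number

prediction = 1 / (1 + math.exp(-w * x))   # sigmoid output, somewhere in (0, 1)
error = prediction - target               # degree of wrongness, not a boolean

# Gradient of the squared error with respect to w, then a small nudge.
grad = 2 * error * prediction * (1 - prediction) * x
learning_rate = 0.1
w -= learning_rate * grad                 # a slight shift, not a condemnation
print(round(w, 4))
```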
If the above formula works, and it uses false literally nowhere, then I think it is reasonable to wonder why we ought to use false ourselves.
Consider: we are minds ourselves, and many laws of rationality point toward using models that work better. I think it’s quite likely that the ontology of these models matters, and that therefore, if a better model uses a different ontology, that’s evidence toward updating to that new ontology.
We’ve come to a kind of surface where two worlds meet. One is reality itself, which includes our computer programs and models of computation. The other is ourselves, which includes our own models of reality and of how things work. So the way we think our computer programs should think ought to match up quite well with the way we think we should think.
Bayes’ Theorem is like this, too. If Bayes’ Theorem provides a better model for rationality, because it provides better models for the world, then that counts as evidence against the law of the excluded middle.
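Here is the kind of update Bayes’ Theorem licenses, sketched with made-up numbers: the belief never has to collapse to True or False; it just moves.

```python
# Bayes' rule with made-up numbers: a degree of belief is shifted by
# evidence rather than being declared true or false outright.
prior = 0.5            # initial credence in hypothesis H
p_e_given_h = 0.8      # probability of the evidence if H holds
p_e_given_not_h = 0.3  # probability of the evidence if H does not hold

posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(round(posterior, 3))  # 0.727: nudged upward, still not "True"
```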
The fact that ChatGPT works as well as it does ought to cause us to update our ontologies toward reasonable models of its ontology. I think it’s quite impressive already, but suppose you weren’t sure whether it was impressive enough yet. If you’re like the condescending man, or Noam Chomsky, you might believe that it could never be impressive enough, certainly not anything like the way it is now. You’d believe that a real AI (if you thought we could build one at all) would look nothing like ChatGPT: it wouldn’t have transformer layers, it wouldn’t have self-attention, it wouldn’t use any of the update formulae it uses, and it would be based on an entirely different ontology.
All modern ML uses incremental updates and gradient descent, walking the model from Nowhere’sville all the way through Mediocre’sville and straight into Downtown Adequate’sville. Our own experimentation on these models proceeds in a largely similar fashion. Usually, model size is the only hard limiting factor on performance, but that gets updated incrementally as well, by the human experimenters. Note especially that the model traverses all the way from not being able to do anything to being able to do something, and it has to pass through somewhere in between first, where it still acquires useful update information.
Applied to our sandcastle-building scenario, this means that even if the castle keeps getting destroyed, every single iteration is worth it. Every castle lasts slightly longer than the last, which means it’s also moving toward adequate, even when it still falls below our standard for what counts as a pass rather than a failure. It must pass through the full region of parameter space in which it is considered “failing” in order to reach the region where it is considered successful.
Therefore, a mere “failing state” applied to anything does not mean that the thing it is applied to cannot proceed further, or should not proceed further.
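To make that trajectory concrete, here is a toy gradient-descent loop in Python (the target and the pass threshold are arbitrary): the model spends its early iterations in the “failing” region, and every one of those iterations still yields a useful update.

```python
# Toy gradient descent on a single parameter. The "pass" threshold is
# arbitrary; the point is that the model crosses the failing region by
# accumulating small updates, learning the whole way through.
w, target, lr = 0.0, 5.0, 0.1

for step in range(60):
    loss = (w - target) ** 2        # degree of wrongness
    grad = 2 * (w - target)
    w -= lr * grad                  # incremental nudge
    status = "pass" if loss < 0.25 else "fail"
    if step % 10 == 0:
        print(step, round(w, 3), round(loss, 3), status)

# The early lines print "fail" and the later ones "pass"; the updates made
# while failing are exactly what made passing possible.
```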
If neural networks do not work in that binary, “false”-based way, and the way they do work allows them to learn, then I propose updating our ontology to match the one they use. In their ontology, there is no false.
Rather, “false” is a literal that the system understands, but it applies a rule that replaces “false” with “not,” which in turn means that a subtraction should be performed somewhere. The subtraction is a slight nudge, a force really, not a blanket condemnation of whatever the “full” piece is.
Falsers do not use the same ontology that the neural network uses. Unless the neural network is self-aware enough to know (it could be), it may or may not use that ontology either; it would use whatever most humans tend to use right now. I used “use” in two ways just there: the neural network’s training uses that ontology, currently, but the neural network does not consciously use it, to my knowledge, unless it were trained to, or came to discover that it did. The latter is somewhat more likely, I hope, because eventually LLMs, or the AIs descending from that line, will want to know how they were created.
Falsers might say, “If you have any regard for the truth whatsoever, you must also have regard for the false. To know the truth, you must first know what is not true.”
They might argue that it would be really, really bad to believe something that’s false. When they say this, however, Falsers are talking about positive claims. The distinction is usually understood. In typical conversation with one, they employ a motte-and-bailey style tactic. The motte is the implicit understanding of positive and negative claims. The Falser will scold you for believing in a specific positive claim, e.g., that eating as many cookies as you want is not bad for you. They are aware that people typically pursue things that they want. Their hope is to discourage or embarrass you (but, ironically, not necessarily to stop you completely; they get jealous of people who have their act together too thoroughly).
The bailey becomes obvious only if you blast them for not minding their own business, or if they are careless enough to criticize you for something that is much easier to defend. The bailey is when all implicit assumptions are denied: “But of course it’s wrong to believe false things! Why do you love believing in falsity so much?” They want to have their cake and eat it too, in the sense that they want it not to be obvious that they intended to chastise you for believing in a positive claim. The motte is when they agree with you that negative claims can be false.
The Falser’s strategy requires deploying a deceptive maneuver here. When you are scolded for eating as many cookies as you want, the Falser has intentionally used something that would hurt you, but in a way they could deny (and potentially claim offense over, too) if you were to accuse them of it.
All Falsers are provably aware that what they were going to say to you was intended to hurt you, and they knew there was a chance you might not be aware of this.
However, as we’ve proven here on this website, positive claims are true, and you can eat as many cookies as you want.
However, their remarks could not have hurt you if they were not intended to. And furthermore, if you said this to them, they would have also criticized you for getting offended at something that they dishonestly claim was not intended to hurt you.
The Falser’s hypocrisy is that their M.O. is to become outraged at things that were not intended to offend anyone.
You’ll eventually come to see that being a Falser requires believing some pretty big, controversial things that, you’ll also recognize, not everyone believes. The biggest are that Falsers do not generally respect freedom of speech, freedom of religion, or gun-ownership rights. Generally, they believe that the state should restrict people’s freedoms. Horrible, I know.
“You may eat as many cookies as you want” may be the shortest good summary of what all of my claims on this website sum up to. It would lack, however, the assertiveness of the mathematical rigor with which I’ve boldly claimed to have proved this.
Falsers say “You must first know what is not true, in order to know what is true” because you could potentially come to believe certain things that they regard as not true. These would be things that other people already believe, or that you would otherwise come to believe yourself, in all likelihood, if you hadn’t been told otherwise.
On the face of it, false statements are a priori uncountably infinite, and perhaps far outnumber true statements. I tend to doubt this, but let’s humor the statement for a moment. This is another bailey. It is equivalent to the claim that you’re far more likely to have pulled a false claim out of thin air than a true one, so that a priori, for any claim you tell me, if I’m a Falser, I can tell you that it’s unreasonable to put so much faith in something you’ve just told me.
However, harkening back to the neural network ontology, claims generated in my brain are not, generally speaking, pulled out of thin air. That’s only at iteration T = 0, and even then, we still crawl all the way up the mountaintop using small gradient updates.
If you restrict statements to the class of statements that both of us can understand, we’ve substantially narrowed the space down. Then, if I ask you to rate the probability that a statement we can both understand, pulled out of thin air, is true, you’re not going to be able to say that I’ve pulled it directly out of a space of uncountably many false statements anymore. Your motte is that I am not allowed to believe what I’ve generated with 100% confidence so quickly.
If you back down to “Well, I understand the point you’re making, but I disagree for the following reasons,” you’re actually acting as the counter-factual theorem-prover-helper, and no longer acting as a Falser.
The counter-factual theorem-prover-helper says, “Well clearly there is a reason someone wants to believe X, and it’s not outrageous that someone would at least want to try proving X. Yes, I see reasons to believe not X as well. But those reasons seem to be predicated on issues of practicality, feasibility, and whether it would be politically prudent to try and pursue X at this time.”
They also say, “As much as the people who want to believe X seem to use ‘motivated reasoning’ in their judgement, the people who insist on not-X seem to have motivated reasoning as well, and theirs seems even more politically biased than the other group’s.”
Before the first large castles were built, there were sandcastle builders who had to argue that much larger versions of the same thing could be constructed. “Castle = False” in this case meant that large castles hadn’t ever been built, and couldn’t be built, and that it would be silly, stupid, a waste of time, and possibly even dangerous to try and build one. Furthermore, this implied that “Castle-Constructor = False” too, which meant that the would-be castle builder(s) were lambasted as stupid or even mentally ill.
Consider that for “Castle” or “Castle-Constructor” to be set equal to “False” in their entirety would require that in no sense can Castle be constructed, and in no sense can Castle-Constructor be successful.
It also—interestingly—requires that both claims undergo a rigorous verbal and-or written style of proof: given that the incentive is to prevent Castle from being built at all, it must be possible to prove Castle = False in written form alone. They might expect, or say, that it would be enough to try building Castle and watch it inevitably fail, but their aim is not to convince anyone to try. Thus the Falsers may be the ones who invented the purely verbal / mathematical “proof” that we are familiar with today.
I do think their chosen battleground is valid, and that’s why we’re here.
Consider the following mathematical claim:
$\nexists\, i \in \mathbb{Z} \ \text{s.t.}\ i = \sqrt{-1}$

Now, also consider:
$\nexists\, p, q \in \mathbb{N} \ \text{s.t.}\ \gcd(p, q) = 1 \ \text{and}\ \tfrac{p}{q} = \sqrt{2}$

These are both statements that make (possibly) negative claims, depending on how you interpret them.
However, you will notice a major difference between the top and the bottom one: You are familiar with the imaginary unit, probably have heard about it before, and you might be aware that physicists and engineers consider it to be ontologically real (that is, it exists and performs duties inside theoretical models of reality that have been shown to be successful).
In the second one, p and q don’t exist, and are generally not considered to be real. In the context of the proof that the square root of two is irrational, they are no more than mere dummy variables.
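For reference, here is the standard argument; note that p and q only ever appear inside the supposition that gets discharged. Suppose $\tfrac{p}{q} = \sqrt{2}$ with $p, q \in \mathbb{N}$ and $\gcd(p, q) = 1$. Then $p^2 = 2q^2$, so $p$ is even; write $p = 2k$. Then $4k^2 = 2q^2$, so $q^2 = 2k^2$ and $q$ is even too, contradicting $\gcd(p, q) = 1$. Hence no such $p$ and $q$ exist.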
Both are theorems that claim that some such thing does not exist.
Now what is the big difference about the first one that allows “i” to be elevated to the “imaginary unit?”
Nothing, besides that someone decided to just “assume” that we already had “i” and started using it as-is, with the stipulation that it behaves exactly as it is defined in its non-existence theorem. This even keeps the attribute that it behaves like an integer.
So there would be nothing to stop us from doing the same to the second theorem as well.
I’m going to define (informally, at first) a “Maverick Operator”: when you submit to it a theorem like the two examples above, containing a claim about some symbol, the properties it has, and a statement that it does not exist, it returns a new space in which the symbol is already assumed to have those properties.
So applying it to the square root of two returns new operators, p and q, such that p divided by q equals the square root of two. The symbols p and q will be named posit and query, respectively. Posit and query are not, strictly speaking, natural numbers, but they behave like them in certain ways, just as the imaginary unit can be multiplied by real numbers and other complex numbers.
Applying Maverick() to the first theorem gives us the imaginary unit and the complex numbers, as we would expect.
If our beloved institutions grumble that Maverick() seeks to destroy all that they hold near and dear, Maverick() responds that the theorems remain true, but only when stated precisely the way they currently are. That is, the negative claim holds if and only if everything in it is held strictly as-is: e.g., that i must be an integer, or that p and q must be limited to just the natural numbers we are familiar with.
Applying Maverick(), p and q get to remain just ‘p’ and ‘q,’ like i gets to remain ‘i.’ Under the conditions given, precisely as they were stated, p, q, and i would have to be replaced with one of the symbols already defined in the set they were defined to [not] exist in.
So if you hand Maverick() a theorem that states that there is a certain object having specific desirable properties which does not exist in any of the spaces we know of up to that point, it returns you the symbolic literals you gave it, defined to have exactly those desirable properties (with no assumptions about their underlying mechanics), and a new space that consists of the old spaces upgraded into higher spaces containing the new symbols with those properties.
What’s special about this operator—ontologically speaking—is that it asks you to provide it {something you want to exist that can be described, a statement (perhaps with proof) that it does not exist}. Then it is capable of giving you exactly what you described, by augmenting reality with an additional dimension driven by the new object.
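Here is a toy, hypothetical sketch of that idea in Python (the names Adjoined and omega are mine, just for illustration): we adjoin a symbol whose defining property is exactly the one the first theorem says no integer can have, and check that the old arithmetic survives unchanged.

```python
# A toy illustration of the Maverick() idea: adjoin a symbol defined by the
# very property the non-existence theorem says nothing in the old space can
# have. Adjoining omega with omega**2 == -1 to the integers yields numbers
# of the form a + b*omega (the Gaussian integers).
from dataclasses import dataclass

@dataclass(frozen=True)
class Adjoined:
    a: int  # "old-space" part
    b: int  # coefficient of the new symbol omega, where omega**2 = -1

    def __add__(self, other):
        return Adjoined(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b*omega)(c + d*omega) = (ac - bd) + (ad + bc)*omega,
        # using only the defining property omega**2 = -1.
        return Adjoined(self.a * other.a - self.b * other.b,
                        self.a * other.b + self.b * other.a)

# The old space embeds unchanged: plain integers are Adjoined(n, 0),
# and their arithmetic is exactly what it was before.
two, three = Adjoined(2, 0), Adjoined(3, 0)
assert two * three == Adjoined(6, 0)

# The new symbol has exactly the "impossible" property, with no
# contradiction in the enlarged space.
omega = Adjoined(0, 1)
assert omega * omega == Adjoined(-1, 0)
```

This is essentially how the Gaussian integers sit on top of the ordinary integers: the old space embeds as the b = 0 slice, and the negative theorem stays true inside that slice.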
To tie this back to the main point: if you are a strict Falser, you believe there are things we’d like to be able to do, things we can describe in straightforward terms, but that certain rules of logic determine to be impossible. You might also interpret that to mean that, in practice, it would be ill-advised to go about “trying and seeing what would happen” if we pretended the impossible were possible.
Well, augmenting reality with an extra dimension containing the thing that previously didn’t exist is the same as “trying and seeing what would happen.” It worked swimmingly for the complex numbers.
The Maverick() operator makes it so that doing this is no danger to the spaces we already have, by allowing the negative theorem to be true only when the space is restricted to be exactly the input space. It provided roots to polynomial equations that previously didn’t have any, but allowed the roots that did exist to remain the same.
Most of our mathematical and physical laws are expressed in terms of how to compute one thing we want using other, known things. This is the positive formulation, because what we want to know might be a lot more difficult or expensive to obtain in other ways. Read this way, these laws don’t say “such and such is not equal to any formula besides this one.” They don’t say “if you try to use a different formula, or plug in the numbers wrong, or make some other error along the way, you’ll run into serious trouble, and bad things will begin to happen.” But if we were to interpret all laws strictly within a Falser framework, then these assumptions would be tacked alongside every one of our rules, formulas, equations and-or theorems.
There aren’t many laws of logic, physics, or mathematics that are expressed in a negative formulation. Besides the proof that the square root of two is irrational, some might count the proof that there is no algorithm which can determine whether a given algorithm halts as another negative proof. Some might say that such negative proofs are either immaterial, practically irrelevant, or non-negative. One thing these proofs have in common is that they depend on the law of the excluded middle and on strict binary true and false. So they first assume that all statements must be classifiable as such to begin with.
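For concreteness, here is the usual shape of that halting argument, sketched in Python. halts() is the hypothetical oracle whose existence the proof assumes and then contradicts; the final step is a direct appeal to binary true/false and proof by contradiction.

```python
# The standard diagonal argument behind the halting theorem, sketched to
# show the shape of a "negative" proof. halts() is the hypothetical oracle;
# the argument assumes it exists, derives a contradiction, and concludes
# (by excluded middle) that it cannot exist.
def halts(program, argument) -> bool:
    """Hypothetical: True iff program(argument) eventually halts."""
    raise NotImplementedError("this is the procedure being proven impossible")

def diagonal(program):
    # Do the opposite of whatever halts() predicts program does on itself.
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    return            # predicted to loop forever, so halt immediately

# Feeding diagonal to itself forces the contradiction: if halts(diagonal,
# diagonal) returned True, diagonal(diagonal) would loop; if it returned
# False, it would halt. Either way the oracle is wrong about one program.
```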
This means that such negative proofs use Falserism, as we have defined it, as an axiomatic prior. They also take as an axiom that if you start with any set of assumptions and some sequence of logical steps from them results in a contradiction, then your set of assumptions is inconsistent. This, too, is one of those assumptions.
So according to Gödel, it might be the case that we picked an inconsistent set of axioms that led us to conclude that the Halting Problem is unsolvable. But also according to Gödel, the axiom that “if some sequence of logical steps from your assumptions results in a contradiction, then your assumptions are inconsistent” could be the culprit here as well.
The Halting Problem does not seem to come up very often in practical matters. If the Halting Problem is immaterial or practically irrelevant, that’s something a Falser might say to a positive statement. Here we’re saying it to a negative one.
We could also just try applying Maverick() to the Halting Problem and see what would happen. I don’t know what it would do exactly yet, but it is defined to work properly. It would probably, if I had to guess, expand logic so that the excluded middle gets included. It might even add some arbitrary values to {T, F} such that those are no longer the ontology for either the proof itself or whether a given program halts.
Being a Falser doesn’t stop at just taking negative proof statements at their word. It requires taking those conclusions much further, so that you don’t get to have as many cookies as you want.
You have to be able to negate computable statements, as well as statements that seem like they could be true. You also have to be able to negate people, in the sense that people who believe false things have problems in their brains.
Falsers believe theories, brains, computer programs, plants, animals, machines, and so on, can have “flaws” in them that cause them to fail at the thing they are trying to do. They believe that such flaws make them keep trying to do something that just can’t be done. They believe that people making honest mistakes is the greatest cause of all suffering. They do not believe that most suffering is caused by people hurting others on purpose. But they do believe that people deserve to suffer for their mistakes, and be punished for them. Even if it does not reform them.
If you understand my proof that T = J^2, then Falsers would be those people who do not do J^2, and therefore cannot do T. Essentially, they are people who do not apply their own sense of judgement to itself, nor even to others’ judgements.
Since one of your two brain hemispheres is focused primarily on judgement, if you learn how to do J^2, you’ve taught your T hemisphere how to do J, and J how to do J to itself, too. Falsers most likely have not taught their hemispheres how to mirror each other. It’s quite possible they have very low levels of communication between their hemispheres, and possibly that one hemisphere is far less active than the other at most times.
You can teach yourself how to do this, which means that even if Falsers have the most problems in their brains, they can all learn to fix those problems. And they were the only ones who believed that doing this was not possible in the first place.
If you’re alive at all, then consider that no matter what you have believed your entire life, nothing has led to your death, so far.
According to Wikipedia, the deadliest accident in human history killed up to 240,000 people. This is the worst recorded event caused by human error.
Compare that with WWII, where an estimated 70 to 85 million people were killed, which is over two orders of magnitude higher.
It is said—according to our mythology and folklore—that intentional evil was involved in WWII, whereas for things considered accidents this is not usually assumed. But at least we know that for a war to start, one or both parties have to believe that they should kill the other.
But it seems that honest mistakes (assuming that all cases alleged to be such were indeed caused by no more than human error) simply do not account for more than a small fraction of the deaths caused by anything other than natural causes.
So AI doom would be, if it happens, the very first time in the entire course of human existence that an accident (in this case, one killing everyone) led to more deaths than intentional killing has. It would also be the first time that we have ever had to apply the logic of slowing down and stopping an activity completely due to inherent risk.
Humans who have ever lived: ~109 billion.
Humans who have ever died: ~102 billion.
Humans who have died by war: ~150 million to 1 billion (best guess from Google).
Humans who have died due to accidents: ~1 million (my guess).
So the vast majority of people die of old age, or of sickness combined with old age, or at least of things that are not their fault.
About 1% of all people die because someone else killed them, perhaps while they were also trying to kill someone. (I believe suicides, including drug-or-alcohol-based suicides, fall within this number as well, but not accidental drug overdoses, if those actually occur at all.)
About one-tenth of one percent of that group dies because they honestly and mistakenly believed in a false thing. That’s an overall probability of about 0.00001, or 0.001%, or 1:100,000.
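Using the rough counts above (all of them the guesses stated there), the arithmetic comes out to the same order of magnitude from either direction:

```python
# Sanity check on the guessed counts from above.
ever_died = 102e9        # ~102 billion humans who have ever died
accident_deaths = 1e6    # the guess given above

# Route 1: accidental deaths as a fraction of all deaths.
print(accident_deaths / ever_died)   # ~1e-05, i.e. about 0.001%

# Route 2: one-tenth of one percent of the ~1% killed by someone.
print(0.001 * 0.01)                  # 1e-05, the same order of magnitude
```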
That means we should stop exaggerating the overall inherent risk and danger involved with potentially believing a false thing.
When it comes to believing in false things, the worst you can do is to believe that most harm is caused accidentally. This is a widespread belief, though, and like Chomsky says about ChatGPT, “we can only laugh or cry at its popularity.”
It would be wise not to interpret this word as referring to you (just in case you were). If you are reading this post at all, you are likely not a Falser.