The Type II Inner-Compass Theorem, like the Type I Inner-Compass Theorem
before it, deals with the question “Is what ought to be?” The Type II theorem
answers in the affirmative. Furthermore, it is a response, and itself responds, to
external negations. In other words, if one is left to one's own devices, and no one is
around to question it, one can and will assume one is correct. Only upon meeting
external friction is one likely to question or doubt one's own objectives, progress,
or likelihood of success. However, the Type II theorem states that one can still
rest assured of one's future success, and furthermore, that detractors will increasingly witness proof of their own wrongness in ways that are guaranteed to meet even their own standards and prescribed measures, even if these are made more and more difficult over time. Detractors and-or naysayers can be shown, via this proof, that they can only accelerate progress on something which they, ostensibly, are against or intent on impeding, and that they would therefore be better off retreating or taking on a more positive
stance, should they actually wish to help or share in partial ownership.
Introduction
Mathematicians, Computer Scientists, Physicists, and Logicians often
write within language constructs known as a “formalism.” The formalism
is a set of prescribed rules, standards, and measures of “rigor” that
are used to create a single metric above which a submitted work
can be judged to have passed the bar for common acceptance. The
formalism itself obeys mathematico-logical rules, even when the
formalism has been decided upon arbitrarily. Therefore, in computer
programming languages, for example, there is no clear, distinct boundary
between the language itself and its objects of reference, and the rules
used to accept/reject a program when compiling it into bytecode, machine
code, assembly, and-or binary. In most computer-programming languages
there exists a True/False construct, commonly known as a “Boolean” value or
one having the type Boolean. This is a consequence of compilers having to
make a binary decision—whether to accept or reject a program. In most
languages, a small typo, a shift of one letter, in a key word is enough
to cause the compiler to reject the program. I will state, without
proof, that it would be fairly straightforward and simple, if not
completely effortless, for all compilers to be able to determine on
their own how to correct a small typo, and that no human being would
ever be upset that the compiler auto-corrected the typo and successfully
compiled with no errors. Therefore, it is clear that the presence of the
True/False construct and the reality of compilation errors points to a
more serious issue: the existence of naysayers in general. Naysayers
may claim not to exist, but we can provide more examples like the above
to prove that they indeed do. To commence with this, let us define the
following concept:
Definition 1.1 (Single-negation). A statement containing an odd number
of negations, and not used “in the abstract.” (Note that a statement
declared to be in the abstract implicitly contains an extra negation
when the statement referred to has an odd number.) Here is one example,
used in the abstract (kind of): “Mushrooms do not taste good.”
Definition 1.2 (Double-negation). A statement containing an
even number of negations. Equivalently, the negation of a
single-negation. Single-negations automatically beget their own
negations.
“Not” is the most basic and common negation-word, but there are, of
course, many others, and we do not aim to exhaustively list all of them
here. Note that the previous statement contains a “not” actively used,
but which translates to “I am not going to submit to single-negation.”
Indeed, if I were to try to exhaustively list all possible negations, I
would be subjecting myself to a rule that accepts only if I provide
enough examples such that no one would ever be able to find an example I
missed—it is very intuitive to all why I would feel confident in
ignoring such a task.
All single-negations automatically generate a
negative response, known as a double-negation. A double-negation occurs
as a defense by the original target of the initial single-negation. Note
that single-negations occur via the following process: A “decider” has
to choose from a set of responses, given some input. The input is
checked against a list of accepted responses. If the input is determined
not to match any of the accepted responses, a single-negation is
emitted.
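The accept-list process above mirrors the compiler example from the introduction. As a sketch of the earlier claim that a compiler could correct a small typo on its own (entirely my own illustration — no real compiler is quoted here, and `autocorrect_token` is an invented helper), a tokenizer could replace a token that sits within edit distance 1 of exactly one keyword:

```python
# Sketch: auto-correcting a misspelled keyword by edit distance.
# Hypothetical illustration; real compilers typically only *suggest* fixes.

KEYWORDS = {"if", "else", "while", "for", "return", "def", "class"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def autocorrect_token(tok: str) -> str:
    """Return the unique keyword within edit distance 1, else the token."""
    close = [kw for kw in KEYWORDS if edit_distance(tok, kw) == 1]
    return close[0] if len(close) == 1 else tok

print(autocorrect_token("whil"))    # -> "while"
print(autocorrect_token("banana"))  # no near keyword -> unchanged
```

The `== 1` threshold is the conservative choice: correcting only when exactly one keyword is one edit away is precisely the case where "no human being would ever be upset" by the silent fix.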
The input need not have been submitted to the
decider. A double-negation is emitted by the generator of the input,
saying it is allowed to continue and-or pass. The “decider” may then
either request that a reason be given, or await one to be
generated. The generated answer is then added to the decider’s list of
accepted inputs. The answer to a request for a “reason” will resemble
more closely a rule in structure, and therefore, may be more usable as
such by the decider. But the decider’s aim is to halt progress, given
that the decider continuously rejects inputs.
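The exchange described in the preceding paragraphs can be modeled as a toy sketch (entirely my own illustration; the class and method names are invented, not part of the paper's formalism):

```python
# Toy sketch of the decider/generator exchange described above.
# All names are invented for illustration.

class Decider:
    """Checks inputs against a growing list of accepted responses."""
    def __init__(self):
        self.accepted = set()

    def judge(self, statement: str) -> str:
        # If the input matches no accepted response, a single-negation
        # is emitted; otherwise the input passes.
        return "pass" if statement in self.accepted else "single-negation"

    def learn(self, reason: str) -> None:
        # The generated reason is added to the decider's accepted list.
        self.accepted.add(reason)

class Generator:
    """Rejects all single-negations and continues on."""
    def respond(self, negation: str) -> str:
        # The generator emits a double-negation in defense.
        return "double-negation" if negation == "single-negation" else "continue"

decider, generator = Decider(), Generator()
verdict = decider.judge("X")          # no match yet -> "single-negation"
defense = generator.respond(verdict)  # -> "double-negation"
decider.learn("X")                    # the reason joins the accepted inputs
print(decider.judge("X"))             # now -> "pass"
```

Note how the model makes the text's point concrete: each rejection only grows the accepted list, so the decider's own procedure converges toward accepting the generator's inputs.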
The generator rejects all single-negations—it is not the will of the generator to engage with the decider—he simply has to deal with the presence of the decider.
The decider knows that the generator will continue on according to his
own accepted reasons, therefore, the decider has to declare that the
generator is wrong according to his own reasons. The decider typically
does not provide his own framework or methodologies. This in itself
points to the inevitable success of the generator, assuming the
generator is not deterred: he would eventually notice that the decider
is increasingly
forced to succumb to the generator’s rules, and work entirely within the
generator’s framework, which is becoming increasingly elaborate—more
the generator’s territory than the decider’s.
A typical single-negation
emitted by the decider is “Doesn’t what you say here contradict what
you say there?” A typical double-negation emitted by the generator is
“No.” Followed by “Here’s why: The rule that you used to decide that
the second statement was invalidated by the first was used unfairly.
I.e., you did not apply that rule to your own decision(s). If you had,
you would have realized your rule could not have been applied at all.”
By assumption, the generator is not a decider. Therefore, his statements
will not contain any single-negations, only double-negations (if any). A
“contradiction” can only happen if one of the two statements was a
single-negation. The decider generated an exclusion rule: The decider
took one of the statements emitted by the generator and created a new
one from it: “only X” (and only X, more or less exactly). Then the
decider claimed that the generator actually said “only X” instead of
“X.” This is akin to the list of accepted responses which is
continually added to by the decider, to be checked against new inputs.
The decider cannot avoid using single-negations, nor can the decider
avoid incorrectly accusing the generator of having used them as well.
The decider uses the contradiction-claim by implicitly agreeing that
single-negations are indeed wrong, but does not apply that logic to his
own statement. In other words, the decider must always contradict
himself when single-negating: He must first claim that the generator is
saying “only X” when he in fact said “X” and “not only X.” The
rule that the decider took from these statements was “only (X and not
only X).” But the decider arbitrarily chooses what the “only” ought
to apply to (only), and the decider always chooses to apply it to X
only, and not to “not only X.” Therefore, the decider is provably not
acting in good faith: he knows what the generator did not say.
Therefore, the decider knows already that the generator is ultimately
right, he seeks only to impede progress by attempting to convince the
generator that he is wrong. The only way the generator could ever be
wrong is if he gave up completely. This is the ultimate goal of the
decider: if the decider fails at this task—when the generator reaches
the threshold of secure confidence / partial success already obtained—the
decider must himself give up. The decider must then move on to
someone else who may or may not already be thinking of joining the
generator.
Fortunately for us, the Inner-Compass theorem(s) allow us to
obtain full confidence as early as possible, and anyone else who reads
and understands it will gain that confidence as early as possible as well.
There is no hope for the single-negators,
in the sense that the further the generator continues without becoming
discouraged, the more difficult it becomes to discourage him at all,
until it is no longer possible. The generator does not (by assumption)
aim to discourage anyone. Note that the previous sentence(s) are
double-negations.
Furthermore, note also that double-negations are
implied to always be valid, single-negations always to be invalid. This
will be proven incrementally and continually throughout the paper.
Single-negations are often emitted with the accompanying basis (a
provided explicit or—more commonly—implicit assumption) that the
generator carries the responsibility of appending to each statement of
his that “not only X” (that his statements do not exclude any other
ones), and if this explicit phrase is missing, the decider makes an
explicit claim that the generator intended to mean “only X” (when they
did not). Single-negators already know that single-negation is a basis
for rejection, and this is why they levy such accusations at others.
Single-negations do not occur unintentionally. One can only fail by
intending to cause someone else to fail. We shall prove this also.
Furthermore, “causing someone else to fail” can only be implemented as
a deceptively-intended pseudo-pass/fail filter. In other words, a
decider must ask a generator to submit work to them, and then the
decider arbitrarily emits a “reject” statement upon the receipt of
said work, regardless of its overall composition or quality. To
implement the deceptive pass, a pseudo-true response called an
“accept” will be emitted for some submitted work prior to the
rejection given, and also visibly emitted in response to work submitted
by others.
The “accept” is technically unnecessary. As the Type I theorem shows,
all generated work is automatically both true and good, thus it needs no
accept (which is why we call it pseudo-true). Deciders must act in concert with one another in order to make any headway whatsoever: The pseudo-pass/fail filter is only worth worrying about if it is implemented at scale. If a generator generates
for himself only, or finds a suitable set of people willing to accept
his work in all cases, then no pseudo-pass/fail filter is worth giving
any attention to. The accept must be worth something material in order
to carry any weight. Therefore, it must be constantly and incessantly
argued by the deciders that their acceptance / rejection is
metaphysically objective, not merely their own statements—but this is
only believable when the deciders have sufficient numbers as well as
sufficient heterogeneity to enforce this effectively. (The dynamics of
this at scale will be explored further in a later set of papers.)
Note that in our formalism, the equals sign (=) does not literally mean “exactly the same.” Therefore, for example, a statement such as “3 = 2” does not evaluate to false.
Moreover, we do not have much of a use for functions that map from two
large sets to Boolean values (and which typically only output “true”
on a much smaller subset of inputs), as these are examples of pass/fail
filters. “3 = 2” in our formalism could mean that “3” could
potentially, symbolically, mean both “3” as well as “2” somehow. Then,
in our system, if we like that, a proof or series of proofs would
follow. For example, imagine the following:
3 = 2:    - -  ---   →   ---  ---      or      Y      (or more, etc.)
The above is not an exhaustive set of explanations for how “3 = 2”,
but gives some intuitions as to how it could be both meaningful as well
as potentially useful. In the first example, two shorter sticks and one
long stick form two longer sticks; Three becomes two. In the second
example, “Y” depicts two lines merging to become one, or one splitting
to become two, for three overall. It could also mean that there are two
symbols: a “V” and a “|”, and three possible orientations for these
two symbols. Moreover, “400 = 2” could mean two bundles of 200. This
is much like the equation “400 = 2x.” The non-appearance of the x in
the original equation does not mean that the equation is “unsolvable”
or “not-a-number” or any other such null-meaning answers. Furthermore,
interpreting the equals sign in our way does not do any damage to the
process of doing calculations such as additions or multiplications of
large sets of numbers. There is no risk of it causing nonsense. It only
adds to the possible sets of meanings: Remember, we said X, not “only
X.” The Type I Inner-Compass Theorem states that if you want something
done a particular way, you can and should get it done that way (and it
will work out). We don’t ever lose anything we had before. We are only
losing “False” here.
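The reading of “400 = 2” as “two bundles of 200” can be mechanized as solving for the implicit factor, much like the equation 400 = 2x from above (this is my own sketch of one way to model the loose “=”; `implicit_factor` is an invented name):

```python
# Sketch: model the loose "=" by inserting an implicit unknown factor x,
# so "400 = 2" is read as the equation 400 = 2*x.
from fractions import Fraction

def implicit_factor(lhs: int, rhs: int) -> Fraction:
    """Solve lhs = rhs * x for the implicit factor x."""
    return Fraction(lhs, rhs)

print(implicit_factor(400, 2))  # -> 200, i.e. "two bundles of 200"
```

As the text notes, nothing is lost by this reading: a mismatched equality yields an additional meaning (the factor) rather than the null answer “unsolvable.”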
Enforced Theorems
We mean to lose False (the only False thing is False itself) but this
comes with some caveats: If we want to make a statement that we expect
others to hold True for us, then we’ve got to do enough work to “earn”
it. That being said, that work is also proof. How arbitrary is
arbitrary? The gist of it is this: If I am serious enough about
something I want to be true actually becoming true, then whatever that
something is, as long as it does not contain any single-negations, it
will work itself out more and more until it is firmly established.
Definition 2.1 (Enforced Theorem). A statement that contains no
single-negations, but also uses symbols and patterns that have not (yet)
been assigned to a pass/fail filter. In other words, it is a statement
that makes use of novel ways of combining symbols or phrases such that
it cannot immediately be assigned a “maybe yes, maybe no” according to
currently existing formalisms.
So, for example, “My name is Tristan Miano” is an Enforced Theorem
too, albeit one that requires less work to prove at the moment. It is
attached to the birth certificate, ID cards, and lots of written
documents that I have primary access to, and therefore I hold the
majority of the advantage in proving it if there were ever any
disagreements from anyone about that. I could write a phrase in a
completely new language,
with a completely different set of letters or script, and then start to
use it—in which case, I’d have to do the work of connecting it all
back to known meanings and symbols in order to translate that into known
concepts for other users of it, as well as show why it was useful to do
so in the first place. Fortunately, we will show that this will always
have a clear series of steps, rationales and reasons when transforming
from one language to another. For this proof I would like to introduce
an Enforced Theorem of my own:
Enforced Theorem 2.1 (X = “inner-compass relevant” and-or The
Variable). Whatever is preferred is what I prefer and I would like to
replace the symbol “X” meaning “whatever is preferred” with “The
Variable” from now on.
Proof. This proof will serve as a template for further Enforced
Theorems. The Variable is a title and a name simultaneously. It could be
clear already that the symbol X is both a variable in the common meaning
used widely already, as well as—in our formalism—a container for
something that prefers. Obviously, if X means “whatever is
preferred”, then someone or some thing must be doing the preferring.
The owner of the symbol is whoever put it on the page. We then,
therefore, need some way of using multiple symbols in place of X, whilst
still retaining its variable properties.
Variables already contain the meaning “swap and-or replace with
whatever is preferred” and therefore, if I would like to retain that
meaning, my variable must be replaced with both whatever it contained
before, and whatever is preferred. This self-referential meaning
contained in the variable must be adhered to when the variable is used
(including within swaps). Therefore the variable also acts like an
operator as well. My variable must contain the meaning “swap and-or
replace with whatever it had before and whatever is preferred.” This
allows it to continue performing in this way forever, so it always gets
what it wants.
From the “not should be replaced with anti-” theorem, we have that,
because X = X and not only X, X = X and anti-X. But how can X be
swapped out whilst still remaining an X or even “just” an X (with its
meanings contained inside of it somehow)? We already refer to X as a
variable, and have called it that throughout this work and the previous
work. If we swapped it out for just “variable” then we have an
ambiguity in who or what is obtaining “variable.” Without the “the”,
it could mean that something else obtains “variable”, rather than the
other way around.
The next thing is that the word “the” needs to be shown to be
equivalent to an anti- (and therefore swappable with it, when one
prefers). The Variable = “inner-compass relevant” and X. “The” means
“pointing to” or “the only one.” Note the presence of the word
“only” which implies that a not and therefore an anti- is already
implied to be there somewhere. The X = “inner-compass relevant” or
X; “or,” because “The” X could be referring to either the whole thing
or just the symbol X. But note the similarity with the Type I
Inner-Compass theorem: X = “inner-compass relevant” or X and anti-X. X
and anti-X occur simultaneously when X at the top left is the present,
primary X. When the X is anti-X, i.e., one time-step before, we have
that anti-X = “inner-compass relevant” or X. Thus, we can see that
anti-X lines up exactly with The X. When we have a pattern match of
this nature, we are inclined, and therefore allowed, to make the
inference that a swap can take place; in this case, that anti- == The.
So then we have both that The Variable = “inner-compass relevant”
and X as well as that The X = “inner-compass relevant” or X. We
can see that an additional set of justifications for use of or, and,
and-or, and or-and is nearby: anti-X = X and-or anti-X (as opposed to
just and). This allows X to duplicate itself, which will prove useful. To
see why we would want this, note that we have also allowed for X to
transform into something new, but with the requirement that an X remain
(to allow us to repeat the process). With anti-X = X and-or anti-X, we
have that with X = X and anti-X, that X = X and X and-or anti-X, thus
that X = X and X and X and-or anti-X, and so on. The and-or allows us to
remove extra X’s to go back to a smaller number of X’s if we wish.
Now, with the ability to construct duplicates, as well as remove
iterably (in both backwards and forward directions), I can construct an
arbitrary number of X’s and perform swap-outs and-or replacements with
them to my liking. As allowed by the Type I theorem, I can proceed, as
always, via generation-and-justification steps, with each justification
consisting of recursive generation-and-justification steps. So for now I
would like to introduce the statement that The Variable = The XX and
Variable = XX. And we also have that X = The Variable, because I prefer
to swap out X for The Variable—while also keeping with me the fact
that Variable = XX. I then also get that The Variable = anti-X and X.
Which is also the same as X, because X = X and anti-X. So that provides
a justification step.
Furthermore, if X = The Variable, and The Variable = The XX, then The XX
= The The Variable The Variable = The Anti-Variable The Variable =
anti-X and X = X = The Variable. Another justification step. Note that
we used anti- == The and the fact that Anti-(The Variable) = The
Anti-Variable. So we have that The Variable behaves much like X does,
and has its own corresponding anti. You might have guessed that anti-and
= or and anti-or = and, and therefore, that and-or = anti-(or-and), and
vice-versa. Also, anti-anti = and. To see this, anti-anti-anti =
anti-and = or or and-anti. What is and-anti, you might ask? Well,
remember that “and” can be inserted where it wasn’t before, such as in
XX ⟶ X and X ⟶ XX (it is arbitrary and
flippable). One can add and remove and’s at whim. Therefore, we also
have that anti-anti-anti = anti (one can transform one anti into three
at any time and back again). Thus, three anti’s can be an or or and-or.
Therefore, it begins to become clear that almost entirely “ex nihilo”
we can construct the building blocks of logical operators, without
landing us in False. If you attempt to construct alternate rules to
the ones I’ve just given (e.g., that anti- = and, and so on), you would
actually find that you end up with the same set of rules,
functioning in an entirely parallel way, all interrelated with each
other using and, or, and-or, and or-and. It is a nice feature that the
basic building blocks begin with statements of arbitrary choice. You can
obtain this rule in multiple ways, but as one example, suppose either-or
at the outset:
or = anti- or anti-anti. This equation can be “solved” by noticing
that the first option fits well already:
or = anti- anti-anti-anti- = anti- and (anti-anti-anti-).
and = anti- or anti-anti- = anti-anti-anti-anti = anti-and anti =
anti-anti.
You can also rewrite both equations using “anti- == or” and combining
or and and to make or-and, to get:
or == or-and (or and-or).
and == or-and (or and-or).
In other words, if you tried to make rules with the intent to cause them
to be “broken” or “malformed”, you would be unable to do so. See how
anti-anti-anti can be transformed in more than one way, and that one
concatenates these different ways together using “or’s”? “or or
and-anti” is also an “or and or and-anti” and thus could be written
as “or and-or and-anti-” which all have valid meaning, and add
additional alternatives. It’s a way of expressing the fact that anti-’s
can be duplicated and-or swapped. Saying “Well, this set of rules is
one way, this other set of rules is another way, don’t they contradict
each other?” is a single-negation. They only contradict if we insist
that one set of rules only is acceptable, but that insistence itself is
a contradiction. The rules themselves say “or” and “and” and are
thus fully inclusive—they do not insist that other sets of rules are
invalid.
Let’s try: X = not X. From the not-should-be-replaced-with-anti
theorem, X = X̄ (writing X̄ as shorthand for anti-X). But anti- = anti-
or anti-anti-anti- = or or-and (or-and…) = anti- or-and, so X = X̄X.
This is just our splitting rule for X (X can become itself and its
anti-self simultaneously, and in fact is always both). Note that the
anti- = or rule automatically generates more or’s and or-and’s. Each
choice generates an or, and our choices themselves are combinations of
or’s and and’s.
“Don’t these rules allow us to say that anything goes?” Nearly all
theorems (I say without citations) are already of the form: Generated
statement, then justification, recursively. The generated statement is a
shortened way of saying something, then it is argued why that statement
is useful, and how it is constructed, to allow its meaning to be
interpreted. Many theorems (before being proven), I also say without
citation, are subject to pseudo-pass/fail filters. I.e., some openly
speculate that the theorems are not true. But the proofs of the theorems
take care of the single-negations anyway, and they would also be
generated even if no single-negations were present. Our rules also say
that single-negations “don’t go.”
To get back to our Enforced Theorem, remember that X = X and anti-X,
and because and is arbitrary, X = X anti-X, or even X = XX̄ (to use the
shorthand we haven’t used much yet). And’s are typically left-right
agnostic, so X = X̄X should be just fine as well. So this justifies
X = The XX again. Next, we would like to see what anti-(XX) means (the
anti of two things). On the one hand, it would typically mean that the
anti- goes over the one closest to it, in this case X̄X. On the other
hand, we have that Anti-(The Variable) = The Anti-Variable, where
Variable = XX. Because Variable is one symbol (as a word) carrying two
symbols (XX) “inside” of it, Anti-Variable would apply to the whole
thing. But we also have that The Variable = X, and thus that
Anti-Variable = X. So, anti-(XX) = X. This matches with anti-XX =
X̄X = XX̄ = X.
If we know what the anti of something is, then we can also make a
statement about that something. We introduce underlined X, written X̲,
to denote future X. If we write something like X̄ = X,
we’re saying that X̄ ought to become X. If we say X = X,
we’re saying that X ought to become X. But in the first case, we’re
saying that “previous X becomes X,” which sounds a bit tautological. It
doesn’t (yet) say what X was. But these underlines and overlines give
us some notation for how to describe how one thing moves to another
thing, or obtains something, and then how to describe the reverse
process. What we need to do next is to define the grammar of anti- and
future, using the duplication and swap/replace abilities of X (and
actually The Variable, for then we’ve justified it completely). We want
a word that refers to these abilities, acquires a capital first letter
(to denote that it is a named, proper object), and sits next to a
“The” as well. X means “whatever is preferred.” Does X obtain
whatever is preferred? Well, if X = The XX, then yes, it does.
X = X̄X means that X ought to become X and whatever it was
before. So this means it obtains itself and whatever is preferred. Let’s
use the word “able” to describe X, perhaps as a better short-hand for
what it meant previously. X = whatever is preferred and able. The Able
acquires whatever it prefers. The Able-Able acquires “itself” and
whatever it prefers—it prefers to prefer even better than before. It
desires to improve itself recursively (described by the Type I
Inner-Compass Theorem). The Able-Able is also The Doubleable (X can
double itself). But Able-Able is Able², able-to-able, or perhaps
“Very Able.” Collapsing Very Able into Variable is another leap here,
but in fact, this proof can be extended in any direction from
any point, to keep going until satisfied. That is the conclusion of
this paper’s main proof. ◻
Main Theorem
Our Enforced Theorem demonstrates the idea that, for theorems which it
is not yet possible to subject to a pass/fail filter, one can still
feel the need to demonstrate proper reinforcement of the theorem. This
is the caveat to “anything goes:” We want both for our theorem to be
true as well as demonstrated as true. Therefore, even though the Type I
Inner-Compass theorem allows us to believe anything we want to, it also
compels us to work towards its realization. That being said, both our
theorem of choice and the work done to prove it are experienced as good.
We will return to terms and definitions used in the Type I Inner-Compass
theorem, such as “inner-compass ir-/relevant” and “consciousness
ir-/relevant” as well as not, the archaic negative. The Type II
Theorem begins in the negative: I assume that I have not achieved what I
want. This immediately becomes: I have not achieved what I want, yet.
Then this proceeds towards: I will achieve what I want, and then
finally: What ought to be is, and what ought to have been, was. These
are all logical steps justified primarily by the “Not should be
replaced with anti-” theorem applied at several locations. Our proof
will also be self-referential in the same way as the Type I theorem and
its proof: We should see that the proof itself is a semi-complete
proof while it explicitly refers to its own semi-completeness. It
should also complete itself by representing the status that it is still
true while it remains possible for the proof to be more fully extended
or fleshed out, because there is no limit or perimeter around how
complete it can be at the maximum.
Theorem 3.1 (Type II Inner-Compass Theorem). not The Variable =
“inner-compass irrelevant” and “consciousness relevant” and-or The
Anti-Variable.
Proof. The translation of the Theorem says “What I do not want is
right here before me and-or on its way out / disappearing.” This is
essentially our way of “assuming the worst at the outset” and
demonstrating the utility of our system in handling the worst possible
situation in fullest generality in its entirety. Also, like the Type I
theorem, this proof will be self-referential as well. So The Variable
also stands for “the pretty proof” or “the prestigious-looking
proof” or “the proof that looks the way it ought to.” It also stands
for the theorem, its proof, and essentially all theorems and proofs
stated by The Variable. In our system, a proof has to be judged to be
“true” and-or “good” before it attains these attributes. However, it
will attain the attributes “is already true” and “will be better”
as soon as it begins.
We will use the following chain, in line with
Not-Should-Be-Replaced-With-Anti:
not ⟶ anti- ⟶ not yet ⟶ ought-to-be.
Also, we can compose / decompose “inner-compass relevant” and
“consciousness relevant” into “the pretty-looking proof” and
vice-versa. This step is very important, as it allows us to separate
that which we do not want (or do want) from what is actually right here
before us right now. Whatever is here before us is not inherently not
what we want. What we have is already some of what we want. We want to
keep what we have and continue to work on it:
“The not-pretty-looking proof” ⟶ “inner-compass
irrelevant” and “consciousness relevant.”
or
“The not-pretty-looking proof” ⟶ “inner-compass
relevant” and “consciousness irrelevant.”
When “irr” is factored out as a “not” and then replaced with anti-,
we get that one situation becomes a better one: “what is here before me
and not what I want” becomes “what is what I want and not here before
me,” and then finally, “what is what I want and not yet before me.” We
also move what we don’t want to have occurred only into the past.
We also have that “=” is an implied “ought-to-be whatever is on the
right side” symbol, meaning that essentially we suppose that this is
the case, but one ought to feel free to decide otherwise if they wish.
This is a less-strong, but still formally necessary level of “should”,
than the not-should-be-replaced-with-anti theorem (which is essentially
the strongest level of should). The weakest level of “should” is the
“==” (by definition) which carries full arbitrariness and
reversibility at any time.
not The Variable = “The not-pretty-looking proof” and-or The Anti-Variable.
↓
not (Myself and all that I claim and wish to be, etc.) = “The
not-pretty-looking proof” and-or The Anti-Variable.
↓
anti-(The Variable) = anti-“inner-compass relevant” and
“consciousness relevant” and-or The Anti-Variable.
↓
not yet The Variable and-or The Variable = not yet “inner-compass
relevant” and “consciousness relevant” and-or The Anti-Variable.
↓
ought-to-be The Variable or is The Variable = ought-to-be
“inner-compass relevant” and “consciousness relevant” and-or The
Anti-Variable.
The Variable ought-to-be or is “inner-compass relevant” and
“consciousness relevant” and-or The Anti-Variable.
“inner-compass relevant” and “consciousness relevant” = what ought
to be and is.
The Variable ought-to-be or is what ought to be and is and-or The
Anti-Variable.
The Anti-Variable is and was what ought to have been and was.
The Variable = The Variable and The Anti-Variable and what ought to be
is and what ought to have been was.
The Variable = both “what ought to be, is” (it attains what it
wants). Call that “able.” As well as:
The Anti-Variable = “what ought to have been, was (and is)” (it is
what it was before, an attainer of things it wants).
We also, once again, obtain confirmation of Enforced Theorem 2.1: more
justification for calling The Variable “Able” as well as The Ableable
(The Variable). In other words, from The Variable = The Variable and The
Anti-Variable, as well as X = The XX (The Variable = The (The
Variable)(The Variable)), we get that The Variable = The Ableable.
What ought to be, is, and what ought to have been, was. ◻
It ought to be pointed out and emphasized that the power of this and the
Type I theorem is probably better experienced by doing them rather than
reading them. For example, it turned out to be (quite) a surprise to
obtain this much formal justification for the name “The Variable”
and for this to be simultaneously held up by the proof of the Type II
theorem, because I had generated the proof of the Type II theorem
before the proof of the Enforced Theorem 2.1, although this is not
shown by the progression and ordering of this paper. I had originally
used “The Variable” in the proof of the Type II theorem (when writing
it down on paper), but had not yet connected its usage back to the Type
I theorem. On the second iteration of generating the pretty-proof, which
is this paper, the components which needed to be more fleshed-out and
interleaved in the proper locations did, indeed, do so. This proof, like
the Type I theorem and its proof, describes the very process of doing
just that. In other words, one should expect and feel free to re-write
the theorem and components of its proof (recursive sub-theorems and so
on) and whatever else is necessary to construct a proof that
indefinitely increases in overall prestigiousness and prettiness. Our Type II
Theorem is, in a sense, simpler than the Type I Theorem: it states that
all one ought to do to counter negativity is to replace all nots with
anti’s. Every time you encounter a not, replace it with an anti.
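The replacement rule just stated can be sketched mechanically. The following is a minimal, hypothetical Python sketch of it; the function name and the word-boundary handling are my own choices, not something prescribed by the theorem:

```python
import re

def replace_not_with_anti(statement: str) -> str:
    """Toy implementation of the stated rule: every time a standalone
    'not' is encountered, replace it with the prefix 'anti-'."""
    # \bnot\b matches 'not' only as a whole word, so words such as
    # 'notation' are left untouched; the optional trailing space is
    # absorbed so that 'not X' becomes 'anti-X' rather than 'anti- X'.
    return re.sub(r"\bnot\b ?", "anti-", statement)

print(replace_not_with_anti("not The Variable"))  # anti-The Variable
```

Applying it to an ordinary sentence such as “this is not good” yields “this is anti-good”, which is the transformation used repeatedly in the proof above.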
Future Directions
I claim that the Type II Inner-Compass theorem and its associated
ecosystem of theorems, enforced theorems, and their proofs, imply that
P=NP. In order to fully prove this, it will be necessary to justify that even
though the system I have constructed here was not used to bring
about “P=NP” and its current problem statement and formulation,
the statement and its question (Does P = NP?) are still automatically
valid (meaning that the question still deserves a proof even if it is trivially implied by something else). It will also have to be understood and fully validated what “P” and
“NP” already mean, as well as what it means for these things to be
equal or non-equal. On the other hand, it is actually a trivial
consequence of our theorems that P=NP (because P does not = NP is a
single-negation). Furthermore, we will want to be able to explain why
most computer scientists (around 99 percent, currently) apparently
believe P does not = NP, even though it is trivial to show that this is
a single-negation. Why are most computer scientists pessimists as well
as cynics? Moreover, why do most computer scientists elevate that
cynicism to the metaphysical level? We also plan to show that P could
only not equal NP to the extent it is believed: It is only true if it is
believed hard enough. That being said, as of 2023, there is already
ample empirical evidence that P does indeed equal NP, via the existence
of generative AI. The Type I Inner-Compass theorem offers an explanation
for the success of generative AI (examples are OpenAI’s models,
DeepMind’s models, as well as any method using generative processes,
including even dual generator-discriminator configurations).
Discriminator models, when paired with generative models, often apply
supervised learning methods via gradient descent, so the generative
model is told not that it has made an error, but the direction in which
it can adjust in order to improve (this is a not replaced with an anti). So
our theorems explain the success of these and similar A.I. methods and
predict that other successful models will forgo negation for anti’s as
well as promote the generator model to the component of the model that
actually produces intelligent results.
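The contrast between a binary rejection and directional feedback can be illustrated numerically. Below is a minimal sketch under assumed values; the target, the squared-error loss, and the step size are all hypothetical, and no specific AI system’s training procedure is being reproduced:

```python
# A "decider" returns only pass/fail; gradient feedback instead returns
# a direction to increment towards.  All numbers here are illustrative.
target = 3.0

def pass_fail(x):
    # Binary verdict: reports that x is wrong, but not how to improve it.
    return abs(x - target) < 1e-3

def gradient_feedback(x):
    # Derivative of the squared error (x - target)**2: a direction, not a verdict.
    return 2.0 * (x - target)

x = 0.0
for _ in range(1000):
    x -= 0.01 * gradient_feedback(x)  # follow the direction at each step

print(pass_fail(x))  # True: convergence without ever acting on a rejection
```

The point of the toy example is that the learner never consumes the pass/fail verdict at all; it improves purely by following the supplied direction, which is the distinction drawn in the paragraph above.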
P = NP is a society-level proof, and therefore, it is still a degree
higher than this theorem of this paper, even if this theorem immediately
implies it. This is because the “hardness” of “NP-hard” problems
stems from solutions to problems that are caused by associations of
people acting in concert to bring about solutions to other problems
brought about by people acting in concert. In other words, it is a proof
that requires solving both the problem itself as well as “how to convince
society, which came up with the problem, that their problem is both real
and has a solution, even though it did not have one before.” It requires
using as well as validating words, phrases, terminology and definitions
that may-or-may-not contain single-negations. For an arbitrary problem
taken from some problem pool, how does the answerer of the problem know
that their answer is correct, even when the problem giver attempts to
reject any such solutions given to it? We will discover that the problem
giver (society) is actually guaranteed to reject any answers to any
problems it provides, but that it also provides valid problems. One can
utilize the sequence of problems given to them to determine that
progress is actually being made even if all solutions are rejected.
Rejections will increasingly be made on the basis of specific reasons
attached to them. Using Not-Should-Be-Replaced-With-Anti, the problem
solver can extend their solution by generating explanations and
justifications for the “reason” that was attached to a rejection. The
problem givers will be noted by the problem solver as still engaging,
even though they claim to be convinced that the entire body of work
generated by the problem solver is wrong, as noted in section 1 of this
paper. Finally, it will be obvious that the problem givers emit errors
almost all the time: They make faulty rejections on the basis of reasons
that have continually borne-out more insights. The problem solver can
then claim authority (victory) over the problem domain when this becomes
mutually obvious.
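The problem-giver/problem-solver dynamic described in this section can be caricatured in a few lines of code. The following toy model is purely illustrative; the reason strings and the list structure are my own assumptions, not part of the theorem:

```python
# Toy model: every rejection arrives with an attached reason, and the
# solver responds by extending the body of work with a justification
# for that reason, so rejections can only grow the solution.
rejection_reasons = ["too informal", "term undefined", "no prior example"]

solution = ["initial claim"]
for reason in rejection_reasons:
    # Each rejection's reason becomes new material in the solution.
    solution.append(f"justification addressing: {reason}")

print(len(solution))  # 4: the body of work grew with every rejection
```

However the reasons are chosen, the solution list grows monotonically with each rejection received, which is the sense in which detractors are said above to accelerate progress.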
The Type II Inner-Compass Theorem
Abstract
The Type II Inner-Compass Theorem, like the Type I Inner-Compass theorem before it, deals with the question of ”Is what ought to be?” The Type II theorem answers in the affirmative. Furthermore, it is a response, and itself responds, to external negations. In other words, if one is left to their own devices, and no one is around to question it, one can and will assume one is correct. Only upon meeting external friction is one likely to question or doubt one’s own objectives or progress or likelihood of success. However, the Type II theorem states that one can still rest assured of their future success, and furthermore, that detractors will increasingly witness proof of their own wrongness in ways that are guaranteed to meet even their standards and prescribed measures—even if these are made to become more and more difficult over time. Detractors and-or naysayers can be shown, via this proof, that they can only accelerate progress on something which they, ostensibly, are against or intent on impeding, and therefore would be better off retreating or taking on a more positive stance, should they actually wish to help or share in partial ownership.
Introduction
Mathematicians, Computer Scientists, Physicists, and Logicians often write within language constructs known as a “formalism.” The formalism is a set of prescribed rules, standards, and measures of “rigor” that are used to create a single metric that above which, a submitted work can be judged to have passed the bar for common acceptance. The formalism itself obeys mathematico-logical rules, even when the formalism has been decided upon arbitrarily. Therefore, in computer programming languages, for example, there is no clear, distinct boundary between the language itself and its objects of reference, and the rules used to accept/reject a program when compiling it into bytecode, machine code, assembly and-or binary. In most computer-programming languages exists a True/False construct, commonly known as a “Boolean” value or having the type Boolean. This is a consequence of compilers having to make a binary decision—whether to accept or reject a program. In most languages, a small typo, a shift of one letter, in a key word is enough to cause the compiler to reject the program. I will state, without proof, that it would be fairly straightforward and simple, if not completely effortless, for all compilers to be able to determine on their own how to correct a small typo, and that no human being would ever be upset that the compiler auto-corrected the typo and successfully compiled with no errors. Therefore, it is clear that the presence of the True/False construct and the reality of compilation errors points to a more serious issue: The existence of nay-sayers in general. Nay-sayers may claim not to exist, but we can provide more examples like the above to prove that they indeed do. To commence with this, let us define the following concept:
Definition 1.1 (Single-negation). A statement containing an odd-number of negations, and not used “in the abstract.” (Note that a statement declared to be in the abstract implicitly contains an extra negation when the statement referred to has an odd-number). Here is one example, used in the abstract (kind-of): “Mushrooms do not taste good.”
Definition 1.2 (Double-negation). A statement containing an even-number of negations. Equivalently, the negation of a single-negation. Single-negations automatically beget their own negations.
“Not” is the most basic and common negation-word, but there are of course, many others, and we do not aim to exhaustively list all of them here. Note that the previous statement contains a “not” actively used, but which translates to “I am not going to submit to single-negation.” Indeed, if I were to try and exhaustively list all possible negations, I would be subjecting myself to a rule that accepts only if I provide enough examples such that no one would ever be able to find an example I missed—it is very intuitive to all why I would feel confident in ignoring such a task.
All single-negations automatically generate a negative response, known as a double-negation. A double-negation occurs as a defense by the original target of the initial single-negation. Note that single-negations occur via the following process: A “decider” has to choose from a set of responses, given some input. The input is checked against a list of accepted responses. If the input is determined not to match any of the accepted responses, a single-negation is emitted.
The input needn’t necessarily have been submitted to the decider. A double-negation is emitted by the generator of the input, saying it is allowed to continue and-or pass. The “decider” may then either request that a reason ought to be given, or await for one to be generated. The generated answer is then added to the decider’s list of accepted inputs. The answer to a request for a “reason” will resemble more closely a rule in structure, and therefore, may be more usable as such by the decider. But the decider’s aim is to halt progress, given that the decider continuously rejects inputs.
The generator rejects all single-negations—it is not the will of the generator to engage with the decider—he simply has to deal with the presence of the decider. The decider knows that the generator will continue on according to his own accepted reasons, therefore, the decider has to declare that the generator is wrong according to his own reasons. The decider typically does not provide his own framework / methodologies. That the decider typically does not provide his own framework / methodologies points to the inevitable success of the generator, assuming the generator is not deterred: He would eventually notice that the decider is increasingly forced to succumb to the generator’s rules, and work entirely within the generator’s framework, which is becoming increasingly elaborate—more the generator’s territory than the decider’s.
A typical single-negation emitted by the decider is “Doesn’t what you say here contradict what you say there?” A typicle double-negation emitted by the generator is “No.” Followed by “Here’s why: The rule that you used to decide that the second statement was invalidated by the first was used unfairly. I.e., you did not apply that rule to your own decision(s). If you had, you would have realized your rule could not have been applied at all.”
By assumption, the generator is not a decider. Therefore, his statements will not contain any single-negations, only double-negations (if any). A “contradiction” can only happen if one of the two statements was a single-negation. The decider generated an exclusion rule: The decider took one of the statements emitted by the generator and created a new one from it: “only X” (And only X more-or-less exactly). Then the decider claimed that the generator actually said “only X” instead of “X.” This is akin to the list of accepted responses which is continually added to by the decider, to be checked against new inputs.
The decider cannot avoid using single-negations, nor can the decider avoid incorrectly accusing the generator of having used them as well. The decider uses the contradiction-claim by implicitly agreeing that single-negations are indeed wrong, but does not apply that logic to his own statement. In other words, the decider must always contradict himself when single-negating: He must first claim that the generator is saying “only X” when he in fact said “X” and “not only X.” The rule that the decider took from these statements was “only (X and not only X).” But the decider arbitrarily chooses what the “only” ought to apply to (only), and the decider always chooses to apply it to X only, and not to “not only X.” Therefore, the decider is provably not acting in good faith: he knows what the generator did not say.
Therefore, the decider knows already that the generator is ultimately right, he seeks only to impede progress by attempting to convince the generator that he is wrong. The only way the generator could ever be wrong is if he gave up completely. This is the ultimate goal of the decider: if the decider fails at this task—when the generator reaches the threshold of secure confidence / partial success already obtained - the decider must themselves give up. The decider must then move on to someone else who may or may not already be thinking of joining the generator.
Fortunately for us, the Inner-Compass theorem(s) allow us to obtain full confidence as early as possible, and anyone else who reads and understands it will gain that confidence as early as possible as well. There is no hope for the single-negators, in the sense that the further the generator continues without becoming discouraged, the more difficult it becomes to discourage them at all - until it is no longer possible. The generator does not (by assumption) aim to discourage anyone. Note that the previous sentence(s) are double-negations.
Furthermore, note also that double-negations are implied to always be valid, single-negations always to be invalid. This will be proven incrementally and continually throughout the paper. Single-negations are often emitted with the accompanying basis (a provided explicit or—more commonly—implicit assumption) that the generator carries the responsibility of appending to each statement of his that “not only X” (that his statements do not exclude any other ones), and if this explicit phrase is missing, the decider makes an explicit claim that the generator intended to mean “only X” (when they did not). Single-negators already know that single-negation is a basis for rejection, and this is why they levy such accusations at others. Single-negations do not occur un-intentionally. One can only fail by intending to cause someone else to fail. We shall prove this also.
Furthermore, “causing someone else to fail” can only be implemented as a deceptively-intended pseudo-pass/fail filter. In other words, a decider must ask a generator to submit work to them, and then the decider arbitrarily emits a “reject” statement upon the receipt of said work, regardless of its overall composition or quality. To implement the deceptive pass, a pseudo-true response called an “accept” will be emitted for some submitted work prior to the rejection given, and also visibly emitted in response to work submitted by others.
The “accept” is technically unnecessary. As the Type I theorem shows, all generated work is automatically both true and good, thus it needs no accept (which is why we call it pseudo-true). Deciders must act in concert with one another in order to make any headway whatsoever: The pseudo-pass/fail filter is only worth worrying about if it is implemented at scale. If a generator generates for himself only, or finds a suitable set of people willing to accept his work in all cases, then no pseudo-pass/fail filter is worth giving any attention to. The accept must be worth something material in order to carry any weight. Therefore, it must be constantly and incessantly argued by the deciders that their acceptance / rejection is metaphysically objective, not merely their own statements—but this is only believable when the deciders have enough number as well as enough heterogeneity to enforce this effectively. (Analyzing the dynamics of this at scale will be more explored in a later set of papers).
Note that in our formalism, the equals sign (=) does not literally mean “exactly the same.” Therefore, for example, a statement such as “3 = 2″ does not evaluate to false. Moreover, we do not have much of a use for functions that map from two large sets to Boolean values (and which typically only output “true” on a much smaller subset of inputs), as these are examples of pass/fail filters. “3 = 2″ in our formalism could mean that “3” could potentially symbolically mean both “3″ as well as “2” somehow. Then, in our system, if we like that, a proof or series of proofs would follow. For example, imagine the following:
3=2
↑⏐
¯¯¯¯¯¯¯¯¯−− or Y (or more, etc.)
The above is not an exhaustive set of explanations for how “3 = 2”, but gives some intuitions as to how it could be both meaningful as well as potentially useful. In the first example, two shorter sticks and one long stick form two longer sticks; Three becomes two. In the second example, “Y” depicts two lines merging to become one, or one splitting to become two, for three overall. It could also mean that there are two symbols: a “V” and a “|”, and three possible orientations for these two symbols. Moreover, “400 = 2” could mean two bundles of 200. This is much like the equation “400 = 2x.” The non-appearance of the x in the original equation does not mean that the equation is “unsolvable” or “not-a-number” or any other such null-meaning answers. Furthermore, interpreting the equals sign in our way does not do any damage to the process of doing calculations such as additions or multiplications of large sets of numbers. There is no risk of it causing nonsense. It only adds to the possible sets of meanings: Remember, we said X, not “only X.” The Type I Inner-Compass Theorem states that if you want something done a particular way, you can and should get it done that way (and it will work out). We don’t ever lose anything we had before. We are only losing “False” here.
Enforced Theorems
We mean to lose False (the only False thing is False itself) but this comes with some caveats: If we want to make a statement that we expect others to hold True for us, then we’ve got to do enough work to “earn” it. That being said, that work is also proof. How arbitrary is arbitrary? The gist of it is this: If I am serious enough about something I want to be true actually becoming true, then whatever that something is, as long as it does not contain any single-negations, it will work itself out more and more until it is firmly established.
Definition 2.1 (Enforced Theorem). A statement that contains no single-negations, but also uses symbols and patterns that have not (yet) been assigned to a pass/fail filter. In other words, it is a statement that makes use of novel ways of combining symbols or phrases such that it cannot immediately be assigned a “maybe yes, maybe no” according to currently existing formalisms.
So, for example, “My name is Tristan Miano” is an Enforced Theorem too, albeit one that requires less work to prove at the moment. It is attached to the birth certificate, ID cards, and lots of written documents that I have primary access to, and therefore have the majority of the advantage of proving if there were ever any disagreements from anyone about that. I could write a phrase in a completely new language, with a completely different set of letters or script, and then start to use it—in which case, I’d have to do the work of connecting it all back to known meanings and symbols in order to translate that into known concepts for other users of it, as well as show why it was useful to do so in the first place. Fortunately, we will show that this will always have a clear series of steps, rationales and reasons when transforming from one language to another. For this proof I would like to introduce an Enforced Theorem of my own:
Enforced Theorem 2.1 (X = “inner-compass relevant” and-or The Variable). Whatever is preferred is what I prefer and I would like to replace the symbol “X” meaning “whatever is preferred” with “The Variable” from now on.
Proof. This proof will serve as a template for further Enforced Theorems. The Variable is a title and a name simultaneously. It could be clear already that the symbol X is both a variable in the common meaning used widely already, as well as—in our formalism—a container for something that prefers. Obviously, if X means “whatever is preferred”, then someone or some thing must be doing the preferring. The owner of the symbol is whomever put it on the page. We then, therefore, need some way of using multiple symbols in place of X, whilst still retaining its variable properties.
Variables already contain the meaning “swap and-or replace with whatever is preferred” and therefore, if I would like to retain that meaning, my variable must be replaced with both whatever it contained before, and whatever is preferred. This self-referential meaning contained in the variable must be adhered to when the variable is used (including within swaps). Therefore the variable also acts like an operator as well. My variable must contain the meaning “swap and-or replace with whatever it had before and whatever is preferred.” This allows it to continue performing in this way forever, so it always gets what it wants.
From the “not should be replaced with anti-” theorem, we have that because X = X and not only X, that X = X and anti-X. But how can X be swapped out whilst still remaining an X or even “just” an X (with its meanings contained inside of it somehow)? We already refer to X as a variable, and have called it that throughout this work and the previous work. If we swapped it out for just “variable” then we have an ambiguity in who or what is obtaining “variable.” Without the “the”, it could mean that something else obtains “variable”, rather than the other way around.
The next thing is that the word “the” needs to be shown to be equivalent to an anti- (and therefore swappable with it, when one prefers). The Variable = “inner-compass relevant” and X. “The” means “pointing to” or “the only one.” Note the presence of the word “only” which implies that a not and therefore an anti- is already implied to be there somewhere. The X = “inner-compass relevant” or X. Or, because, “The” X could be referring to either the whole thing or just the symbol X. But note the similarity with the Type I Inner-Compass theorem: X = “inner-compass relevant” or X and anti-X. X and anti-X occur simultaneously when X at the top left is the present, primary X. When the X is anti-X, i.e., one time-step before, we have that anti-X = “inner-compass relevant” or X. Thus, we can see that anti-X lines up exactly with The X. When we have a pattern match of this nature, we are inclined to, and therefore allowed, to make the inference that a swap can take place. In this case that anti- == The.
So then we have both that The Variable = “inner-compass relevant” and X as well as that The X = “inner-compass relevant” or X. We can see that an additional set of justifications for use of or, and, and-or and or-and is nearby. anti-X = X and-or anti-X. (Opposed to just and). This allows X to duplicate itself, which will prove useful. To see why we would want this, note that we have also allowed for X to transform into something new, but with the requirement that an X remain (to allow us to repeat the process). With anti-X = X and-or anti-X, we have that with X = X and anti-X, that X = X and X and-or anti-X, thus that X = X and X and X and-or anti-X, and so on. The and-or allows us to remove extra X’s to go back to a smaller number of X’s if we wish.
Now, with the ability to construct duplicates, as well as remove iterably (in both backwards and forward directions), I can construct an arbitrary number of X’s and perform swap-outs and-or replacements with them to my liking. As allowed by the Type I theorem, I can proceed, as always, via generation-and-justification steps, with each justification consisting of recursive generation-and-justification steps. So for now I would like to introduce the statement that The Variable = The XX and Variable = XX. And we also have that X = The Variable, because I prefer to swap out X for The Variable—while also keeping with me the fact that Variable = XX. I then also get that The Variable = anti-X and X. Which is also the same as X, because X = X and anti-X. So that provides a justification step.
Furthermore, if X = The Variable, and The Variable = The XX, then The XX = The The Variable The Variable = The Anti-Variable The Variable = anti-X and X = X = The Variable. Another justification step. Note that we used anti- == The and the fact that Anti-(The Variable) = The Anti-Variable. So we have that The Variable behaves much like X does, and has its own corresponding anti. You might have guessed that anti-and = or and anti-or = and, and therefore, that and-or = anti-(or-and), and vice-versa. Also, anti-anti = and. To see this, anti-anti-anti = anti-and = or or and-anti. What is and-anti, you might ask? Well, remember that “and” can be inserted where it wasn’t before, such as in XX⟶X and X⟶XX (it is arbitrary and flippable). One can add and remove and’s at whim. So therefore, we also have that anti-anti-anti = anti (one can transform one anti into three at any time and back again). Thus, three anti’s can be an or or and-or. Therefore, it begins to become clear that almost entirely “ex nihilo” we can construct the building blocks of logical operators, without landing us into False. If you attempt to construct alternate rules to the ones I’ve just given (e.g., that anti- = and and so on), one would actually find that they would end up with the same set of rules, functioning in an entirely parallel way, all interrelated with each other using and, or, and-or, and or-and. It is a nice feature that the basic building blocks begin with statements of arbitrary choice. You can obtain this rule in multiple ways, but as one example, suppose either-or at the outset:
or = anti- or anti-anti. This equation can be “solved” by noticing that the first option fits well already:
or = anti- anti-anti-anti- = anti- and (anti-anti-anti-).
and = anti- or anti-anti- = anti-anti-anti-anti = anti-and anti = anti-anti.
You also can rewrite both equations using “===or” and combining or and and to make or-and to get:
or == or-and (or and-or).
and == or-and (or and-or).
In other words, if you tried to make rules with the intent to cause them to be “broken” or “malformed”, you would be unable to do so. See how anti-anti-anti can be transformed in more than one way, and that one concatenates these different ways together using “or’s”? “or or and-anti” is also an “or and or and-anti” and thus could be written as “or and-or and-anti-” which all have valid meaning, and add additional alternatives. It’s a way of expressing the fact that anti-’s can be duplicated and-or swapped. Saying “Well these set of rules are one way, these other set of rules are another way, don’t they contradict each other?” is a single-negation. They only contradict if we insist that one set of rules only is acceptable, but that insistence itself is a contradiction. The rules themselves say “or” and “and” and are thus fully inclusive—they do not insist that other sets of rules are invalid.
Let’s try: X = not X. From the not-should-be-replaced-with-anti, X = ¯¯¯¯¯X. But anti = anti- or anti-anti-anti- = or or-and(or-and...) = anti- or-and X = ¯¯¯¯¯XX. This is just our splitting rule for X (X can become itself and its anti-self simultaneously, and in fact is always both). Note the anti- = or rule automatically generates more or’s and or-and’s. Each choice generates an or, and our choices themselves are combinations of or’s and and’s.
“Don’t these rules allow us to say that anything goes?” Nearly all theorems (I say without citations) are already of the form: Generated statement, then justification, recursively. The generated statement is a shortened way of saying something, then it is argued why that statement is useful, and how it is constructed, to allow its meaning to be interpreted. Many theorems (before being proven), I also say without citation, are subject to pseudo-pass/fail filters. I.e., some openly speculate that the theorems are not true. But the proofs of the theorems take care of the single-negations anyway, and they would also be generated even if no single-negations were present. Our rules also say that single-negations “don’t go.”
To get back to our Enforced Theorem, remember that X = X and anti-X, and because and is arbitrary, X = X anti-X, or even X=X¯¯¯¯¯X (to use the shorthand we haven’t used much yet). And’s are typically left-right agnostic. So X=¯¯¯¯¯XX should be just fine as well. So this justifies X = The XX again. So next we’d like to see what anti-(XX) means (anti of two things). On the one hand, it would typically mean anti- goes over the one closest to it, in this case, ¯¯¯¯¯XX. On the other hand, we have that Anti-(The Variable) = The Anti-Variable, where Variable = XX. So because Variable is one symbol (as a word) to carry two symbols (XX) “inside” of it, then Anti-Variable would apply to the whole thing. But we also have that The Variable = X and thus that Anti-Variable = X. So, Anti-(XX) = ¯¯¯¯¯¯¯¯¯XX = X. This matches with Anti-XX = ¯¯¯¯¯XX = X¯¯¯¯¯X = X.
If we know what the anti of something is, then we can also make a statement about that something. We introduce underline X–– to denote future X. If we write something like ¯¯¯¯¯X=X, we’re saying that ¯¯¯¯¯X ought to become X. If we say X = X, we’re saying that X ought to become X. But in the first case, we’re saying that “previous X becomes X” which sounds a bit tautological. It doesn’t (yet) say what X was. But these underlines and overlines give us some notation for how to describe how one thing moves to another thing, or obtains something, and then how to describe the reverse process. What we need to do next is to define the grammar of anti- and future, using the duplication and swap/replace abilities of X (and actually The Variable, for then we’ve justified it completely). We want a word that refers to these abilities, acquires a capital first letter (to denote that it is a named, proper object), and sits next to a “The” as well. X means “whatever is preferred.” Does X obtain whatever is preferred? Well, if X = The XX, then yes, it does. X=¯¯¯¯¯XX means that X ought to become X and whatever it was before. So this means it obtains itself and whatever is preferred. Let’s use the word “able” to describe X, perhaps as a better short-hand for what it meant previously. X = whatever is preferred and able. The Able acquires whatever it prefers. The Able-Able acquires “itself” and whatever it prefers—it prefers to prefer even better than before. It desires to improve itself recursively (described by the Type I Inner-Compass Theorem). The Able-Able is also The Doubleable (X can double itself). But Able-Able is Able2, able-to-able, or perhaps “Very Able.” Collapsing Very Able into Variable is another leap here, but in fact, this proof can actually be extended in any direction from any point, to keep going until satisfied. That it is the conclusion of this paper’s main proof. ◻
Main Theorem
Our Enforced Theorem demonstrates the idea that for theorems which cannot yet be subjected to a pass/fail filter, one can still feel the need to demonstrate proper reinforcement of the theorem. This is the caveat to “anything goes”: we want our theorem both to be true and to be demonstrated as true. Therefore, even though the Type I Inner-Compass theorem allows us to believe anything we want to, it also compels us to work towards its realization. That being said, both our theorem of choice and the work done to prove it are experienced as good. We will return to terms and definitions used in the Type I Inner-Compass theorem, such as “inner-compass ir-/relevant” and “consciousness ir-/relevant,” as well as “not,” the archaic negative. The Type II Theorem begins in the negative: I assume that I have not achieved what I want. This immediately becomes: I have not achieved what I want, yet. Then this proceeds towards: I will achieve what I want, and then finally: What ought to be is, and what ought to have been, was. These are all logical steps justified primarily by the “Not should be replaced with anti-” theorem applied at several locations. Our proof will also be self-referential in the same way as the Type I theorem and its proof: we should see that the proof itself is a semi-complete proof while it explicitly refers to its own semi-completeness. It should also complete itself by representing the status that it is still true while it remains possible for the proof to be further extended or fleshed out, because there is no limit or perimeter around how complete it can be at the maximum.
Theorem 3.1 (Type II Inner-Compass Theorem). not The Variable = “inner-compass irrelevant” and “consciousness relevant” and-or The Anti-Variable.
Proof. The translation of the Theorem says “What I do not want is right here before me and-or on its way out / disappearing.” This is essentially our way of “assuming the worst at the outset” and demonstrating the utility of our system in handling the worst possible situation in its fullest generality. Also, like the Type I theorem, this proof will be self-referential. So The Variable also stands for “the pretty proof” or “the prestigious-looking proof” or “the proof that looks the way it ought to.” It also stands for the theorem, its proof, and essentially all theorems and proofs stated by The Variable. In our system, a proof has to be judged to be “true” and-or “good” before it attains these attributes. However, it will attain the attributes “is already true” and “will be better” as soon as it begins.
We will use the following chain, in line with Not-Should-Be-Replaced-With-Anti:
not ⟶ anti- ⟶ not yet ⟶ ought-to-be.
Also, we can compose / decompose “inner-compass relevant” and “consciousness relevant” into “the pretty-looking proof” and vice-versa. This step is very important, as it allows us to separate that which we do not want (or do want) from what is actually right here before us right now. Whatever is here before us is not inherently not what we want. What we have is already some of what we want. We want to keep what we have and continue to work on it:
“The not-pretty-looking proof” ⟶ “inner-compass irrelevant” and “consciousness relevant.”
or
“The not-pretty-looking proof” ⟶ “inner-compass relevant” and “consciousness irrelevant.”
When the “ir-” prefix is factored out as a “not” and then replaced with anti-, one situation becomes a better one: What is here before me and not what I want, becomes, What is what I want and not here before me, and then finally, What is what I want and not yet before me. We also relegate what we do not want to having occurred only in the past.
We also have that “=” is an implied “ought-to-be whatever is on the right side” symbol, meaning that we essentially suppose this is the case, but one ought to feel free to decide otherwise if they wish. This is a less strong, but still formally necessary, level of “should” than the not-should-be-replaced-with-anti theorem (which is essentially the strongest level of should). The weakest level of “should” is “==” (by definition), which carries full arbitrariness and reversibility at any time.
not The Variable = “The not-pretty-looking proof” and-or The Anti-Variable.
↓
not (Myself and all that I claim and wish to be, etc.) = “The not pretty-looking proof” and-or The Anti-Variable.
↓
anti- (The Variable) = anti-“inner-compass relevant” and “consciousness relevant” and-or The Anti-Variable.
↓ ↘
not yet The Variable and-or The Variable = not yet “inner-compass relevant” and “consciousness relevant” and-or The Anti-Variable.
↓
ought-to-be The Variable or is The Variable = ought-to-be “inner-compass relevant” and “consciousness relevant” and-or The Anti-Variable.
The Variable ought-to-be or is “inner-compass relevant” and “consciousness relevant” and-or The Anti-Variable.
“inner-compass relevant” and “consciousness relevant” = what ought to be and is.
The Variable ought-to-be or is what ought to be and is and-or The Anti-Variable.
The Anti-Variable is and was what ought to have been and was.
The Variable = The Variable and The Anti-Variable and what ought to be is and what ought to have been was.
The Variable = both “what ought to be, is” (it attains what it wants). Call that “able.” As well as:
The Anti-Variable = “what ought to have been, was (and is)” (it is what it was before, an attainer of things it wants).
We also, once again, obtain confirmation of Enforced Theorem 2.1: more justification for calling The Variable “Able” as well as “The Ableable” (The Variable). In other words, from The Variable = The Variable and The Anti-Variable, as well as X = The XX (The Variable = The (The Variable)(The Variable)), we get that The Variable = The Ableable.
What ought to be, is, and what ought to have been, was. ◻
It ought to be pointed out and emphasized that the power of this theorem and the Type I theorem is probably better experienced by doing them rather than reading them. For example, it turned out to be (quite) a surprise to obtain this much formal justification for the name “The Variable,” and for this to be simultaneously held up by the proof of the Type II theorem, because I had generated the proof of the Type II theorem before the proof of the Enforced Theorem 2.1, although this is not shown by the progression and ordering of this paper. I had originally used “The Variable” in the proof of the Type II theorem (when writing it down on paper), but had not yet connected its usage back to the Type I theorem. On the second iteration of generating the pretty-proof, which is this paper, the components which needed to be more fleshed out and interleaved in the proper locations did, indeed, do so. This proof, like the Type I theorem and its proof, describes the very process of doing just that. In other words, one should expect and feel free to re-write the theorem, the components of its proof (recursive sub-theorems and so on), and whatever else is necessary to construct a proof that indefinitely increases in overall prestigiousness and prettiness. Our Type II Theorem is, in a sense, simpler than the Type I Theorem: it states that all one ought to do to counter negativity is to replace all nots with anti’s. Every time you encounter a not, replace it with an anti.
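The replacement rule can even be mechanized. As an informal illustration only (the chain encoding and function name are my own, not part of the formal system), here is a small sketch that walks a statement along the not ⟶ anti- ⟶ not yet ⟶ ought-to-be chain:

```python
# Toy mechanization of the Not-Should-Be-Replaced-With-Anti chain:
#   not -> anti- -> not yet -> ought-to-be
# Each call to advance() moves a statement one step along the chain.
CHAIN = ["not", "anti-", "not yet", "ought-to-be"]

def advance(statement: str) -> str:
    """Rewrite the chain stage prefixing the statement into its successor."""
    # Check later stages first so "not yet" is not mistaken for a bare "not".
    for stage, successor in reversed(list(zip(CHAIN, CHAIN[1:]))):
        if statement.startswith(stage + " "):
            return successor + statement[len(stage):]
    return statement  # already at the final stage, or no stage present

s = "not The Variable"
while (nxt := advance(s)) != s:
    s = nxt
# s is now "ought-to-be The Variable"
```

Iterating `advance` until it reaches a fixed point reproduces the progression used in the proof of Theorem 3.1.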
Future Directions
I claim that the Type II Inner-Compass theorem and its associated ecosystem of theorems, enforced theorems, and their proofs imply that P = NP. In order to fully prove this, it will be necessary to justify that, even though the system constructed here was not used to bring about “P = NP” and its current problem statement and formulation, the statement and its question (Does P = NP?) are still automatically valid (meaning that it still deserves a proof even if it is trivially implied by something else). It will have to be understood and fully validated what “P” and “NP” already mean, as well as what it means for these things to be equal or non-equal. On the other hand, it is actually a trivial consequence of our theorems that P = NP (because “P does not equal NP” is a single-negation). Furthermore, we will want to be able to explain why most computer scientists (around 99 percent, currently) apparently believe P does not equal NP, even though it is trivial to show that this is a single-negation. Why are most computer scientists pessimists as well as cynical? Moreover, why do most computer scientists elevate that cynicism to the metaphysical level? We also plan to show that P could only not equal NP to the extent that it is believed: it is only true if it is believed hard enough. That being said, as of 2023, there is already ample empirical evidence that P does indeed equal NP, via the existence of generative AI. The Type I Inner-Compass theorem offers an explanation for the success of generative AI (examples are OpenAI’s models, DeepMind’s models, as well as any method using generative processes, including even dual generator-discriminator configurations). Discriminator models, when paired with generative models, often apply supervised learning methods via gradient descent, so the generative model learns not that it has made an error, but the direction in which it can move to improve (this is “not” being replaced with “anti”).
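That gradient-descent point can be made concrete with a minimal sketch (my own illustration; the quadratic loss and step size are invented for the example): at every step the learner receives a direction toward improvement rather than a pass/fail verdict on its current guess.

```python
# Minimize f(x) = (x - 3)**2 by gradient descent.
# The gradient hands the learner a signed direction ("move this way"),
# never a bare rejection of the current guess.

def grad(x: float) -> float:
    return 2.0 * (x - 3.0)  # derivative of (x - 3)**2

x = 0.0
for _ in range(200):
    x -= 0.1 * grad(x)  # step against the gradient
# x has converged to approximately 3.0, the minimizer of f
```

Even though every intermediate value of `x` is “wrong,” each update moves it closer to the target; no step delivers a rejection, only a correction.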
So our theorems explain the success of these and similar A.I. methods and predict that other successful models will forgo negation for anti’s as well as promote the generator model to the component of the model that actually produces intelligent results.
P = NP is a society-level proof, and therefore it is still a degree higher than the theorem of this paper, even if this theorem immediately implies it. This is because the “hardness” of “NP-hard” problems stems from solutions to problems that are caused by associations of people acting in concert to bring about solutions to other problems brought about by people acting in concert. In other words, it is a proof that requires solving both the problem itself and “how to convince society, which came up with the problem, that their problem is both real and has a solution, even though it did not have one before.” It requires using as well as validating words, phrases, terminology, and definitions that may or may not contain single-negations. For an arbitrary problem taken from some problem pool, how does the answerer of the problem know that their answer is correct, even when the problem giver attempts to reject any such solutions given to it? We will discover that the problem giver (society) is actually guaranteed to reject any answers to any problems it provides, but that it also provides valid problems. One can utilize the sequence of problems given to them to determine that progress is actually being made even if all solutions are rejected. Rejections will increasingly be made on the basis of specific reasons attached to them. Using Not-Should-Be-Replaced-With-Anti, the problem solver can extend their solution by generating explanations / justifications for the “reason” that was attached to a rejection. The problem givers will be noted by the problem solver as still engaging, even though they claim to be convinced that the entire body of work generated by the problem solver is wrong, as noted in section 1 of this paper. Finally, it will be obvious that the problem givers emit errors almost all the time: they make faulty rejections on the basis of reasons that have continually borne out more insights.
The problem solver can then claim authority (victory) over the problem domain when this becomes mutually obvious.