Seems like you are making an important point, but I am not sure I get it. Mind clarifying?

# shminux

Yes, Hilbert formulated the equations (or at least the Hilbert action from which the Einstein field equations follow) at about the same time; brilliant mathematician that he was, he only needed a few hints, and he was already familiar with Riemann’s differential geometry. The idea that differential geometry could be useful for describing gravity as a field had been around since at least 1913, when Grossmann, Einstein’s classmate, with whom Einstein had been collaborating on and off since maybe 1907, published his paper on the topic. I don’t know the full history, but I was under the impression that Einstein was the main driving force behind incorporating Lorentz invariance into a new theory of gravity.

General relativity is an obvious candidate. While special relativity was hanging in the air, and so was quantum mechanics, there was no urgency to improve on Newtonian gravity at the time. There were a few small discrepancies, like the precession of Mercury’s perihelion, but it was not until the discovery of the expanding universe a decade later that it became obvious a new theory was needed.

Just what exactly is out there generating inputs to my senses, and by what mechanism does it remain in sync with everyone else (approximately)?

Sometimes the “out there” can be modeled as a shared reality, sure. The key word is “modeled”. Sometimes this model is not a good one. If you insist on privileging one model over all others to be the true objective external reality valid everywhere, you pay the price where it fails. Like in the OP’s case.

I can see your point, and it’s the one most people implicitly accept. Observations are predictable, therefore there is a shared reality out there generating those observations. It works most of the time. But in the edge cases (or “extremely fine details”) this implicit assumption breaks down. Like in the case of “objective mathematical facts waiting to be discovered”, such as the 98,765th digit of π before you calculate it. So why insist on applying this assumption outside its realm of applicability? Isn’t it a bit like insisting that if you shoot a bullet from a ship moving at nearly the speed of light, the bullet will travel faster than light?

So if there is no “objective” reality, apart from that which we experience, then why is it that we all seem to experience the same reality?

I am not saying that there is no objective reality, just that I am agnostic about it. In the example you describe, it is a useful meta-model, though not all the time. You may notice that, despite video review and slow-motion hi-res cameras, fans of different teams still argue about what happened, and the final decision is in the hands of a referee. You and your partner (especially an ex-partner) may disagree about “what really happened”, and there is often no way to tell “who is right”. One instead has to accept that what one person experienced is not necessarily what another did, and, at least instrumentally, arguing about whose reality is the “true” one is unlikely to be useful at all. One may as well accept the model where somewhat different things happened to different actors.

In the absence of an external reality, why is it that everyone’s model of the world appears to be in such concordance with everyone else’s?

Does it? Who won World War II, the Americans, the British or the Russians? Is Trump a hero or a villain? Did Elon Musk disclose material information in his tweets or not? Do mathematical infinities exist? Are the laws of physics invented or discovered? Was Jesus the son of God? The list of disagreements about “objective reality” is endless. Sure, there is some “concordance” between different people’s views of the world, but it is much weaker than one naively assumes.

Doesn’t that indicate that there is some kind of objective reality, to which our mathematics corresponds?

A reality behind repeatable observations is a good model, as long as it works. My point is that it doesn’t always work, like in the confusion about logical uncertainty.

And I disagree with the assumption behind Wigner’s question, “why does our math work so well at predicting the future?”, specifically that math’s effectiveness is “unreasonable”. Human and animal brains do complicated calculations in real time all the time to get through life, like solving what amounts to non-linear partial differential equations just to get a bite of food into your mouth. Just because it is subconscious does not make it any less math than proving theorems. What most humans mean by math is constructing conscious, not subconscious, meta-models and using them in multiple contexts. But we do subconscious meta-modeling like this all the time in other areas of human experience, so my answer to Wigner’s question is “you are committing a mind projection fallacy; the apparently unreasonable effectiveness of mathematics is a statement about the human mind, not about the world”.

If there is no such thing as non-experienced mathematical truths, then why does everyone’s experience of mathematical truths seem to be the same?

In general, though, your questions about the intuitionist approach to math are best directed to professional mathematicians who are actually intuitionists.

You seem to be conflating two different questions:

*What is your best estimate of the probability that the (currently unknown to you) 98,765th digit of π comes out zero, once someone calculates it?*

and

*What is your best estimate of the probability that the 98,765th digit of π, calculated by two different people, comes out different?*

Once enough people reliably do the same calculation (or if there is another reliable way to perform the observation of the 98,765th digit of π), then it can be added to the list of performed observations and, if needed, used to predict future observations.
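The agreement between independent calculations can be illustrated with a small stdlib-only Python sketch (the helper names `arctan_inv`, `pi_digit_machin`, and `pi_digit_euler` are made up for illustration). Two “different people” here are two different arctangent identities for π, computed with pure integer arithmetic; they produce the same digit at every position. Digit 98,765 itself would work the same way, just slower, so smaller positions are used:

```python
def arctan_inv(x, scale):
    """Fixed-point arctan(1/x) * scale via the Taylor series, integers only."""
    total = 0
    term = scale // x
    k = 0
    sign = 1
    while term:
        total += sign * (term // (2 * k + 1))
        term //= x * x
        k += 1
        sign = -sign
    return total

def pi_digit_machin(n):
    """n-th decimal digit of pi via Machin: pi = 16*atan(1/5) - 4*atan(1/239)."""
    scale = 10 ** (n + 10)  # 10 guard digits against rounding error
    p = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)
    return int(str(p)[n])  # str(p) = "31415926535...", index n is the n-th fractional digit

def pi_digit_euler(n):
    """Same digit via a different identity: pi = 4*atan(1/2) + 4*atan(1/3)."""
    scale = 10 ** (n + 10)
    p = 4 * arctan_inv(2, scale) + 4 * arctan_inv(3, scale)
    return int(str(p)[n])

# Independent routes to the "same" digit agree at every tested position:
for n in (1, 5, 32, 500):
    assert pi_digit_machin(n) == pi_digit_euler(n)
```

Whether one calls what the two routes agree on “a fact of the matter” or “a maximally reliable repeated observation” is exactly the realism-vs-anti-realism question at issue here.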

just what exactly is it that makes the 98,765th digit of π be the same thing when calculated by me, or by Hasan, or by anyone else? Whatever that thing is, what is wrong with calling it “a fact of the matter about what the 98,765th digit of π is”?

This goes back to realism vs anti-realism, not anything I invented. Anti-realism is a self-consistent epistemology, and it pops up in many areas independently. According to Wikipedia, an example of it in science is instrumentalism, and in math it is intuitionism: “there are no non-experienced mathematical truths”.

There is no difference between logical uncertainty and environmental uncertainty in anti-realism. The OP seems to have reinvented the juxtaposition of realism and anti-realism in the setting of probability theory, calling them “perfect Bayesianism” and “subjective Bayesianism” respectively. And “perfect Bayesianism” runs into trouble with logical vs environmental uncertainty because of the extra (and, in the anti-realist view, unnecessary) postulate of an objective reality.

Surely, there must be some fact of the matter about what the ratio of a circle’s circumference to its diameter is?

This is exactly the issue at hand. You believe in external mathematical “facts”, ideal platonic objects. The mathematical territory. This is a useful belief at times, but not in this case, as it gets in the way of making otherwise obvious predictions about observations, such as “how likely is it that a randomly picked digit of π is zero, once it is picked but not yet calculated?”

A perfect Bayesian reasoner that knows the rules of logic and the definition of π must, by the axioms of probability theory, assign probability either 0 or 1 to the claim “the 98,765th digit of π is a 0” (depending on whether or not it is). This is one of the reasons why perfect Bayesian reasoning is intractable. A subjectivist that is not a perfect Bayesian nevertheless claims that they are personally uncertain about the value of the 98,765th digit of π.

The term “perfect Bayesian” sounds misleading; there is nothing perfect about an inability to make good probability estimates. This is like saying a “perfect two-boxer”.

On a related note, what you call the open problem of logical uncertainty is one of the cases where postulating an objective reality (in this case, a mathematical reality), also known on this site as “the territory”, runs into limitations. Once you stop insisting that any yet-unmeasured value or unproven theorem is either true or false (or undecidable) and go with the more intuitionist approach, the made-up contradiction between “there is a 98,765th digit of π out there that has a definite value” and “before calculating the 98,765th digit of π (in effect, making an observation), the best model of π predicts equal probability for all digits” dissolves.
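The claim that “the best model of π predicts equal probability for all digits” can be checked empirically with a short stdlib-only Python sketch (helper names are made up for illustration): compute the first 1,000 decimal digits of π with integer-only Machin-formula arithmetic and tally how often each digit appears.

```python
from collections import Counter

def arctan_inv(x, scale):
    """Fixed-point arctan(1/x) * scale via the Taylor series, integers only."""
    total = 0
    term = scale // x
    k = 0
    sign = 1
    while term:
        total += sign * (term // (2 * k + 1))
        term //= x * x
        k += 1
        sign = -sign
    return total

def pi_fractional_digits(n):
    """First n decimal digits of pi after the point, via Machin's formula."""
    scale = 10 ** (n + 10)  # 10 guard digits
    p = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)
    return str(p)[1:n + 1]  # drop the leading "3", keep n fractional digits

counts = Counter(pi_fractional_digits(1000))
for d in "0123456789":
    print(d, counts[d])  # each digit shows up roughly 100 times out of 1,000
```

The counts hover around 100 per digit, which is what the uniform model predicts; before looking at a particular unobserved position, that model assigns each digit probability 1/10.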

I guess I am an explicitly subjective frequentist. My interpretation of probability is that of a frequency of subjectively similar observations, without any attempt to claim their correspondence to any potential external reality.

The situation is pretty clear: the doctor wants more money; your friend does not want to give him more money for an unnecessary visit, but is also unwilling to knuckle under and sacrifice his time, his money, or both to get the life-saving medicine, which should not even require a prescription renewal in a reasonable healthcare setup.

This was a contest of wills, your friend found some blackmailing ammo (a threat to post on social media), and he won the game of chicken. If the doctor had the nerve and refused, your friend would have somehow found the time to come in the next day, to this doctor or some other, and, depending on his mood that day, made good on his threat or not. We don’t know.

One thing is clear though: it was not about life and death, it was about time and money.

the coin is already heads or tails, no matter that I don’t know which it is

It’s worse than that. All you know is that the coin has landed. You need further observations to learn more. Maybe it will slip from your hand and fall on the ground. Maybe you will be distracted reading LW and forget to check. Maybe you don’t remember which side to check, the wrist side or the hand side. You can insist that the coin has already landed and therefore has landed either heads or tails, but that is not a useful supposition until you actually look. Think just a little way back: the coin is about to land, but has not quite landed yet. Is that the same as the coin having landed? Almost, but not quite. What about a little further back? The uncertainty about the outcome is even greater. So there is nothing special about the landed coin until you actually look, beyond a certain level of probability. A pragmatic approach (I refuse to wade into the ideological debate between militant frequentists and militant Bayesians) would be to use all available information to make the best prediction possible, depending on the question asked.

“The pilots, who had previously flown Russian-designed planes that had audible warning signals, apparently failed to notice it.”

Unlike Russian planes, apparently designed for dummies, the Airbus designers assumed that the pilots would be adequately trained. And sober. Which is not an unreasonable assumption.

Not right at all. The original and the modified Newcomb’s problems are disguised as decision theory problems. Your formulation takes the illusion of decision making out of it.

If you believe that you have the power to make decisions, then the problems are not “functionally equivalent”. If you don’t believe that you have the power to make decisions, then there is no problem or paradox, just a set of observations. You can’t have it both ways. Either you live in a world where agents and decisions are possible, or you do not. You have to pick one of the two assumptions, since they are mutually exclusive.

I have talked about a self-consistent way to present both in my old post.

So the situation is as bad as it could possibly be.

You mean, it is as bad as it could possibly be for the Nash equilibrium to be a good strategy and a good predictor in this setup? Yep, absolutely. All models tend to have their domain of validity, and this game shows the limits of the Nash equilibrium model of decision making.

What is X?

My question is usually “what information do you want to convey to others by using the term X?”

Both. The semi-mythical Cassandra is a case in point: people (and gods) hated her, and she hated being a prophet, but couldn’t do much about it. No one likes a bearer of bad news, and most prophecies are bad news. But being a prophet and being a leader are different jobs; not sure why the OP conflates them.

I have posted some time ago about the Boeing 737 Max 8 MCAS system as an example of incorrigibility.

I wonder if there are other mammals like that, and if not, what would explain this version of the Fermi paradox.