The “guarantor” of your future is two things:
1. A belief that logic works.
2. Taking a Kolmogorov (simplicity) prior and Bayesian updating on new sense data.
Believing logic works has nothing to do with faith; it’s that you cannot do anything useful with the alternative. Then, once you’ve assumed logic and created maths, you just find the simplest explanations that fit what you see. Will the future always be what you expect? No, but you can make claims with very high confidence, e.g. “in 99.99% of worlds where I receive the sense data I did, the Sun will actually rise tomorrow.”
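To make that concrete, here is a toy sketch of the machinery. The hypotheses and complexity numbers are invented for illustration (real Kolmogorov complexity is uncomputable, so any such table is an approximation), but the update itself is the standard Bayesian one:

```python
# Toy sketch of "Kolmogorov prior + Bayesian update". The hypotheses and
# complexity numbers here are made up for illustration only.
import math

n = 10_000  # mornings observed so far; the sun rose on every one

# hypothesis -> (description length K in bits,
#                P(observed data | hypothesis),
#                P(sun rises tomorrow | hypothesis))
hypotheses = {
    "sun always rises":             (10, 1.0, 1.0),
    "rose for n days, stops today": (10 + math.ceil(math.log2(n)), 1.0, 0.0),
    "each day is a fair coin flip": (12, 0.5 ** n, 0.5),
}

# Simplicity prior: weight each hypothesis by 2^-K, then normalize.
prior = {h: 2.0 ** -k for h, (k, _, _) in hypotheses.items()}
z = sum(prior.values())
prior = {h: p / z for h, p in prior.items()}

# Bayesian update: posterior is proportional to prior times likelihood.
post = {h: prior[h] * like for h, (_, like, _) in hypotheses.items()}
z = sum(post.values())
post = {h: p / z for h, p in post.items()}

# Predictive probability of a sunrise tomorrow, averaged over hypotheses.
p_rise = sum(post[h] * pred for h, (_, _, pred) in hypotheses.items())
print(f"P(sunrise tomorrow | data) = {p_rise:.5f}")  # about 0.99994, not 1
```

The grue-style “stops today” hypothesis is what keeps the posterior short of certainty: the simplicity prior penalizes it heavily, but never to exactly zero, which is why the claim comes out as 99.99% rather than a guarantee.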
Just because you panic about the unknown does not mean the unknown will actually be a large factor in your reality.
I do understand the point you are trying to make, but a large part of the speculation around AI on this forum, especially around acausal trade, the simulation hypothesis, etc., lives outside the bounds of the two axioms you have set up. Especially once you start talking about whole brain emulation and the possibility of living in a simulation, you are no longer making educated inferences from logic and sense data: once you posit that all the sense data you have received in your life could be fabricated, you open yourself up to an endless pit of new and unfalsifiable arguments about what exactly is “out there”.
In fact, a lot of simulation-hypothesis arguments have to smuggle assumptions about how the universe works out of the matrix, assuming that any base universe simulating us must have similar laws of thermodynamics, matter, physics, etc., which is of course not a given at all. We could be simulations running in a Conway’s Game of Life universe, after all.
And you can say, “well we must believe in this because the alternative is of no use to us and would be completely unworkable by the lights of my worldview”, in which case you have just made a statement of faith sans evidence either for or against. You choose to believe in a universe where your systems of thinking have purpose and utility, which is basically the point I’m trying to make.
I would say that I focus my thinking on the universes from which I can get sensory input showing that the thinking is useful.
Re: this thread
If the best you can come up with is a probabilistic argument, that concedes the point that there is no guarantee. Well, you put “guarantor” in quotes.
Settling for usefulness alone rather than usefulness+truth is also a concession.
You do know truth only means “consistent with some set of assumptions (axioms)”? What does it mean to look for “true axioms”? That’s why I defer to useful ones.
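To spell out what I mean by “consistent with some set of assumptions”, here is a minimal Lean 4 sketch, purely illustrative and tied to no real axiom system: the same proposition counts as a theorem under one axiom set, and its negation under another.

```lean
-- Minimal sketch (hypothetical axiom sets, for illustration only): the same
-- proposition P is provable from one set of assumptions and refutable from
-- another, so its "truth" is relative to which assumptions you grant.
example (P : Prop) (axiomSetA : P) : P := axiomSetA    -- system A proves P
example (P : Prop) (axiomSetB : ¬P) : ¬P := axiomSetB  -- system B proves ¬P
```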
There’s no agreed definition of truth, so I am not in a position to know that. The definition you have is roughly how logic works, but many would argue that logic doesn’t generate truth for precisely that reason.
A lot of people would use empiricism for that.
Words demarcate the boundaries of meanings. You seem to be claiming there is some undefinable quality to the word “truth” that is useful to us, i.e. some unmeaningful meaning. Believe in ephemeral qualities all you like, but don’t criticize me for missing out on some “truths” that are impossible to discover anyway.
No, it’s just a typical contested philosophical term, with multiple definitions/theories.
Then why is it so difficult for you to write down one of those definitions or theories under which your criticism makes sense?
Probabilistic truth is a concession relative to … wait for it … certain truth!
(The OP is pretty much a complaint about lack of certainty, although not phrased that way).