Yes, I have filled in all the blanks, which is why I wrote “fully resolves the probability space”. I didn’t bother to list every combination of conditional probabilities in my comment, because they’re all trivially obvious. P(awake on Monday) = 1, P(awake on Tuesday) = 1⁄2, which is both obvious and directly related to the similarly named subjective credences Beauty has, upon awakening, about which day it is.
By the way, I’m not saying that credences are not probabilities. They obey the probability space axioms, at least in principle for a rational agent. I’m saying that there are two different probability spaces here, that it is necessary to distinguish them, and that the problem makes statements about one (calling them probabilities) while asking about Beauty’s beliefs (credences), so I just carried that terminology and the related symbols through. Call them P_O and P_L for the objective and local spaces, if you prefer.
For example, it didn’t get “In a room I have only 3 sisters. Anna is reading a book. Alice is playing a match of chess. What the third sister, Amanda, is, doing ?”
I didn’t get this one either. When visualizing the problem, Alice was playing chess online, since in my experience this is how almost all games of chess are played. I tried to look for some sort of wordplay for the alliterative sister names or the strange grammar errors at the end of the question, but didn’t get anywhere.
From my point of view, the problem statement mixes probabilities with credences. The coin flip is stated in terms of equal unconditional probability of outcomes. An awakening on Monday is not probabilistic in the same sense: it always happens. Sleeping Beauty has some nonzero subjective credence upon awakening that it is Monday now.
Let’s use P(event) to represent the probabilities, and C(event) to represent subjective credences.
We know that P(Heads) = P(Tails) = 1⁄2. We also know that P(awake on Monday | Heads) = P(awake on Monday | Tails) = P(awake on Tuesday | Tails) = 1, and P(awake on Tuesday | Heads) = 0. That fully resolves the probability space. Note that “awake on Tuesday” is not disjoint from “awake on Monday”.
The credence problem is underdetermined, in the same way that Bertrand’s Paradox is underdetermined. In the absence of a fully specified problem we have symmetry principles to choose from to fill in the blanks, and they are mutually inconsistent. If we specify the problem further such as by operationalizing a specific bet that Beauty must make on awakening, then this resolves the issue entirely.
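For example, here is a minimal simulation (my own sketch; the two bet settlements below are assumptions I am adding to make the point concrete, not part of the original problem):

```python
import random

def run_experiments(n=100_000):
    # Protocol: Heads -> one awakening (Monday); Tails -> two (Monday, Tuesday).
    per_awakening = []   # one entry per awakening
    per_experiment = []  # one entry per coin flip
    for _ in range(n):
        heads = random.random() < 0.5
        per_experiment.append(heads)
        per_awakening.extend([heads] * (1 if heads else 2))
    return per_awakening, per_experiment

awakenings, experiments = run_experiments()
print(sum(awakenings) / len(awakenings))    # ~0.333: Heads-rate per awakening
print(sum(experiments) / len(experiments))  # ~0.5: Heads-rate per experiment
```

A bet graded at every awakening makes 1⁄3 the right price for Heads; a bet graded once per coin flip makes 1⁄2 the right price. Both are correct answers to different questions, and specifying which one Beauty faces removes the ambiguity.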
Current science is based on models that are, from any agent’s point of view, non-deterministic, and philosophy doesn’t make any predictions.
That aside, there is still a strong sense in which determinism is useless in practice, even if the universe truly were deterministic. You don’t know the initial conditions, you don’t know the true evolution rules, and you can’t predict future outcomes any better than with alternative models that don’t assume the universe is deterministic.
For example: standard quantum mechanics (depending upon interpretation) says either that the universe is deterministic and “you” are actually a mixture of infinitely many different states, or that it is non-deterministic, and it makes no difference which way you interpret it. There is no experiment you can conduct to find out which one is “actually true”. They’re 100% mathematically equivalent.
However: neither option tells you that there is only one predetermined outcome that will happen. They just differ on whether the incorrect part is “only one” or “predetermined”. Neither is philosophical determinism, even if one interpretation yields a technical sort of state-space determinism when viewed over the whole universe.
However, I can’t offer much help with the actual problem in your post—that you need to avoid thinking about determinism to avoid being demotivated. I just don’t feel that, and can’t really imagine how thinking about determinism would make a difference. If I consider doing something, and have the thought “a superintelligent being with perfect knowledge of the universe could have predicted that I would do that”, it has no emotional effect on me that I can discern, and doesn’t affect which things I choose to do or not do.
It’s definitely not the case in my social circles either. When suicide or attempted suicide has happened within groups of people I know, the reaction has mostly been similar to something terrible having happened to that person, like cancer or a traffic accident. Anger has mostly been temporary, except sometimes in those closest. Confusion is common (“why did they do it?”), but not “what a horrible person for having done it”, as I see for killers or attempted killers.
Every single word you might ever use about something in the real world is in the map, not the territory. It is semantically empty to point this out for three specific words.
It doesn’t directly strengthen the lab leak theory: P(emergence at Wuhan & caves distant | leak) is pretty similar to P(emergence at Wuhan & caves nearby | leak).
It does greatly weaken the natural origin theory: P(emergence at Wuhan & caves distant | natural) << P(emergence at Wuhan & caves nearby | natural).
If those are the only credible alternatives, then it greatly increases the posterior odds of the lab leak hypothesis.
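In odds form, with placeholder likelihoods purely to show the mechanism (these numbers are illustrative assumptions, not my actual estimates):

```python
# Odds-form Bayes: posterior odds = prior odds * likelihood ratio.
# All numbers below are illustrative placeholders.
prior_odds = 1.0                 # leak : natural, before this evidence
p_e_given_leak = 0.5             # P(emergence at Wuhan & caves distant | leak)
p_e_given_natural = 0.01         # P(emergence at Wuhan & caves distant | natural)

posterior_odds = prior_odds * p_e_given_leak / p_e_given_natural
print(posterior_odds)  # 50.0: the odds shift 50x toward the leak hypothesis
```

The leak likelihood barely moves with cave distance (that is the “doesn’t directly strengthen” part); the whole shift comes from the collapse of the natural-origin likelihood.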
Decoherence (or any other interpretation of QM) will definitely lead to a pretty uniform distribution over this sort of time scale. Just as in the classical case, the underlying dynamics is extremely unstable within the bounds of conservation laws, with the additional problem that the final state for any given perturbation is a distribution over measurement outcomes instead of a single one.
If there is any actual asymmetry in the setup (e.g. one side of the box was 0.001 K warmer than the other, or the volumes of each side were 10^-9 m^3 different), you will probably get a very lopsided distribution for an observation of which side has more molecules regardless of initial perturbation.
If the setup is actually perfectly symmetric, though (which seems in keeping with the other idealizations in the scenario), the resulting distribution of outcomes will be 50:50, essentially independent of the initial state within the parameters given.
This seems to be focussing on one specific means by which quantum randomness might affect a result.
Another means may be via the personal health of a candidate. For example, everyone has pre-cancerous cells that just need the right trigger to form a malignancy, especially the older people who tend to be candidates in US presidential elections; an undetected existing cancer could likewise progress to become serious.
Is there a chance comparable with 0.1% that, due to a cosmic ray or any other event, a candidate will have something happen to them serious enough to affect their ability to run in the 2028 election? It seems likely that the result of an election depends at least moderately strongly upon who is running.
Do you have greater than 99.9% confidence that it will not be close?
Is this supposed to involve quantum physics, or just some purely classical toy model?
In a quantum physics model, the probability of observing more atoms on one side than the other will be indistinguishable from 50% (assuming that your box is divided exactly in half and all other things are symmetric etc). The initial perturbation will make no difference to this.
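As a rough illustration of why symmetry pins the answer so hard to 50%, here is a toy classical counting model (my own sketch, not a quantum calculation): if each of n atoms is equally likely to be observed on either side, the only deviation from 50% is the vanishing probability of an exact tie.

```python
from math import comb

def p_strictly_more_on_left(n_atoms):
    # By symmetry, P(more left) = P(more right) = (1 - P(tie)) / 2.
    p_tie = comb(n_atoms, n_atoms // 2) / 2**n_atoms if n_atoms % 2 == 0 else 0.0
    return (1 - p_tie) / 2

for n in (10, 100, 10_000):
    print(n, p_strictly_more_on_left(n))
# P(tie) scales as sqrt(2 / (pi * n)), so for macroscopic n the
# probability is 50% to within an unmeasurably small margin.
```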
Sure, I’ll play Game #2. I’ll even play it repeatedly, with a stopping criterion based on my utility of money.
If my utility of money is approximately linear for at least an order of magnitude or so beyond $100 (which it is), the most likely outcome is that I lose all of my initial $100 stake via a biased random walk on log(pot). However, the game would be very much net positive in expected utility, since an exit via the stopping condition wins so much more. It’s a lottery biased in my favour, where I get to choose the odds of winning to some extent, but the payout increases better than linearly in those odds.
I would prefer to repeatedly play Game #1, or better yet a Game #1.5 where I get to choose how much to bet, but if Game #2 was the only option then I’d take it.
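To show the shape of the thing, here is a sketch with made-up parameters (the multipliers, target, and ruin level are all my own assumptions for illustration; I am not restating Game #2’s actual rules):

```python
import random

def play_once(stake=100.0, up=2.2, down=0.4, target=1_000_000.0, ruin=1.0):
    # Each round multiplies the pot by `up` or `down` with equal probability,
    # so log(pot) is a downward-biased random walk (E[log step] < 0) even
    # though the expected per-round multiplier 0.5*2.2 + 0.5*0.4 = 1.3 > 1.
    pot = stake
    while ruin <= pot < target:
        pot *= up if random.random() < 0.5 else down
    return pot if pot >= target else 0.0  # dropping below `ruin` loses the stake

results = [play_once() for _ in range(50_000)]
wins = sum(1 for pot in results if pot > 0)
print(f"lost the stake in {1 - wins / len(results):.1%} of runs")  # most likely outcome
print(f"mean payout: ${sum(results) / len(results):,.0f}")  # still far above the $100 stake
```

Raising `target` lowers the chance of winning but (in this regime) raises the payout faster than the odds shrink, which is the sense in which the lottery is biased in my favour.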
It would certainly be interesting if you did, and it would probably promote competition a lot more than is currently the case. On the other hand, measuring and attributing those effects would be extremely difficult, and any such measure would almost certainly be easy to game.
Here is a maze. Don’t solve it yet
Oops! Too late! Way, way too late.
It was a big gaudy thing that caught my eye as soon as it scrolled into view, was obviously a maze, and mazes are designed to be solved. It took less than a second to solve, which was less time than it took to reorient my attention to the point in the text I was reading, and about 20 seconds before I reached the text “Don’t solve it yet” in my reading.
Maybe a spoiler cover with a more prominent “Here is a maze. Don’t solve it yet” above would have helped?
One butterfly flapping its wings just right instead of the way it actually did.
I’m not sure about you, but I am pretty much already maxed out on the amount of programming I can usefully do per day. It is already rather less than my nominal working hours.
I do agree that a lot more flexibility in working arrangements would be a good thing, but it seems difficult to arrange such a society in (let’s say) the presence of misaligned agents and other detriments to beneficial coordination.
The game Eco has the option to animate your avatar via webcam. Although I do own and play the game occasionally, I have no idea how good this feature is as I do not have a webcam.
I strong downvote any post in which the title is significantly more clickbaity than warranted by the evidence in the post. Including this one.
As long as you can reasonably represent “do not kill everyone”, you can make this a goal of the AI
Even if you can reasonably represent “do not kill everyone”, what makes you believe that you definitely can make this a goal of the AI?
In the end, we’d expect land to increase in value because people who need land get more productive and can therefore afford to bid it higher, but if the value of human labor drops, shouldn’t we expect the value of land to drop too?
Will the value of human labour drop? The immense scale of economic disruptions in a future where there is widespread AGI makes the question itself hard to interpret. There are many theories of value that are relatively similar in modern economies but come apart in very different ones such as these hypothetical futures.
Regardless of that, I think that returns on any relevant capital investments would skyrocket for a while, followed by pretty major rises in rates of return on investment in whatever the new limiting factors turn out to be—very likely some forms of economic rents.
In the end, you presumably don’t much care about returns directly in monetary terms, but in what they get you in the new civilization. Goods, services, influence, security—whatever. I think land as a general rule is pretty unlikely to get you less of these than it does now, and in some cases will likely get you very much more—though as with any investment there will be risks.