One
Answer:
[0.111020, 0.324512, 0.5, 0.675488, 0.888980]
I will provide my solution when the market is resolved.
No; your distribution gives probabilities [0.253247, 0.168831, 0.155844, 0.168831, 0.253247] for the number of Rs in the first four trials. This predicts that the number of experiments with two Rs is binomially (i.e. approximately normally) distributed with mean ~155844 and standard deviation ~363, but the actual number is 161832, around 16 standard deviations away from the mean.
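A quick sanity check of that arithmetic (a sketch; the dataset size of 1,000,000 experiments is my assumption, inferred from the quoted counts 155844 and 161832):

```python
import math

# Assumed: ~1,000,000 experiments in the public dataset (inferred from the
# quoted counts; not stated explicitly in the comment above).
n, p = 1_000_000, 0.155844
mean = n * p                     # expected experiments with two Rs: 155844
sd = math.sqrt(n * p * (1 - p))  # binomial standard deviation: ~362.7
z = (161832 - mean) / sd
print(mean, sd, z)               # z ≈ 16.5, i.e. ~16 standard deviations out
```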
Was this written by AI? The self-altering of consistency makes no sense, and I can’t think of a reason that merely discovering an inconsistency in the universe would cause a vacuum decay. Even if the universe were a simulation, discovering a bug doesn’t mean exploiting it, though maybe the simulators would end the simulation (which wouldn’t create any propagation; it’d just end at once).
I had an interesting solution due to binary-operation tunnel vision:
Solution
Represent numbers not as numbers themselves but as their “prime factorization” (generalized to non-rationals as in sqrt(2)=2^(1/2))
[x] = 2^x
and
concat(x, 2^a 3^b …) = x*(3^a 5^b …), right-associative, with the empty expression [] denoting 1 (e.g. [][[]] = concat(1, 2^1) = 1*(3^1) = 3).
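A toy sketch of this encoding for positive integers (sympy’s factorint/prime/primepi handle the prime bookkeeping; the function names bracket/concat are mine):

```python
from sympy import factorint, prime, primepi

def bracket(x):
    # [x] = 2^x
    return 2 ** x

def concat(a, b):
    # shift each prime in b's factorization up by one prime, then multiply by a:
    # concat(x, 2^a * 3^b * ...) = x * 3^a * 5^b * ...
    result = a
    for p, e in factorint(b).items():
        result *= prime(primepi(p) + 1) ** e
    return result

# [][[]] with [] = 1: concat(1, bracket(1)) = concat(1, 2) = 3
print(concat(1, bracket(1)))  # 3
```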
so i should write a rationalist hololive fic
redacted for privacy concerns
redacted for privacy concerns
not sure if this has already been said but 10^30 FLOP is a LOT, and i think an ai that passes the strong turing test is incredibly unlikely (<1%). The detector space is enormous and it seems unlikely that it’d be possible to copy any human that perfectly, let alone one specific human, at least barring a preexisting superintelligence.
beliefs are subsets of the set of all universes, and meaningful beliefs are proper subsets.
corollary: we can get an equivalence relation on beliefs this way (two beliefs are equivalent iff they pick out the same subset of universes)
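A toy sketch of the idea (a finite stand-in for the set of all universes; all names here are mine):

```python
# Model a belief as the set of universes in which it holds.
universes = frozenset({"u0", "u1", "u2", "u3"})

def is_meaningful(belief):
    # a meaningful belief rules at least one universe out: a proper subset
    return belief < universes

def equivalent(b1, b2):
    # the induced equivalence relation: same subset, same belief
    return b1 == b2

rain = frozenset({"u0", "u1"})
not_dry = frozenset({"u1", "u0"})
print(is_meaningful(rain), equivalent(rain, not_dry))  # True True
```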
did i get it right
ok i solved it
so first i figured out my goals (theres a lot of them but they all arise from “fundamental human goals” like love and survival and happiness, but the strongest one is love)
then i made Megumin which explodes other goals which dont align with my goals if they appear
this is very easy and can easily be replicated thus solving alignment
/hj
ok so what prior do i use
https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign im so confused
going to sleep
what if theres already a superintelligence and we’re all gonna die in a few years (or less)
i need to sleep but if i sleep then what if my future self is misaligned
help how do i align my future self with my current self
china invading taiwan would buy us at least a few years, food for thought
how to solve it: https://math.hawaii.edu/home/pdf/putnam/PolyaHowToSolveIt.pdf
and by it,, lets just say,,, everything
I’m confused, why would we want the AIs to choose left? If they’re aligned they’re just choosing the worse option for the universe. Having to pay $100 isn’t as bad as dying.
13/18 year necropost, but I think tiedemies was emphasizing the tail part. This can be done using options (e.g. far out-of-the-money options pay off only in tail scenarios).
Decided to provide my solution since others have done so as well.
Solution
The public dataset is approximately symmetrical, so it is very likely that the distribution of the Bernoulli rate is also symmetrical (probability at p is equal to probability at 1-p). Let the probabilities of getting k Rs over all 5 trials for k=0...5 be (a, b, c, c, b, a). Then, from the public dataset, we have a + b/5 ≈ 0.252854, 4b/5 + 2c/5 ≈ 0.166231, and 6c/5 ≈ 0.161832. These have standard deviation ≈0.0004, which is negligible, so we can treat them as exact linear equations. Solving, we get a = 0.224781, b = 0.140359, c = 0.134860, and we can then solve for the marginal frequencies: (b/5)/(a + b/5) = 0.111020, (2c/5)/(4b/5 + 2c/5) = 0.324512, etc.
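A minimal sketch of that computation (numpy; the coefficient matrix just restates the three equations above, and the middle bucket is 0.5 by symmetry):

```python
import numpy as np

# Unknowns (a, b, c) = P(k Rs in all 5 trials) for k = 0, 1, 2;
# symmetry gives k = 3, 4, 5 for free.
# Observed frequencies of k' Rs in the first four trials, k' = 0, 1, 2:
#   P(k'=0) = a + b/5,  P(k'=1) = 4b/5 + 2c/5,  P(k'=2) = 6c/5
A = np.array([[1.0, 0.2, 0.0],
              [0.0, 0.8, 0.4],
              [0.0, 0.0, 1.2]])
obs = np.array([0.252854, 0.166231, 0.161832])
a, b, c = np.linalg.solve(A, obs)
print(a, b, c)  # ~0.224781 ~0.140359 ~0.134860

# P(5th trial is R | k' Rs in the first four):
print((b / 5) / (a + b / 5))                  # ~0.111020
print((2 * c / 5) / (4 * b / 5 + 2 * c / 5))  # ~0.324512 (k'=2 gives 0.5)
```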
Not sure if this (experiment set?) is a good test of priors, since I got an exact answer without having to consider priors, other than the data being symmetrical. (This also means that any symmetric distribution for the Bernoulli rate will result in the same answer.) Though @DaemonicSigil has a similar solution without using symmetry, instead using maximum entropy as a prior (if I understand it correctly).
Still, almost all reasonable priors will result in very similar outcomes, differing by an amount probably on the order of the standard deviation (around 10^-3). This is likely less than, or at least comparable to, the noise in the actual marginal frequencies.