Can you by chance pin down your disagreement to a particular axiom? You’re modus tollensing where I expected you would modus ponens.

# solipsist

I didn’t follow everything, but does this attempt to address self-fulfilling prophecies? Assume the oracle has a good track record and releases its information publicly. If I ask it “What are the chances Russia and the US will engage in nuclear war in the next 6 months?”, answers of “0.001” and “0.8” are probably both accurate.

What sorts of output strings are you missing?

Calculating Kolmogorov complexities is hard because it is hard to differentiate between programs that run a long time *and halt* and programs that run a long time *and never halt*. If God gave you a 1.01 MB text file and told you “This program computes BB(1000000)”, then you could easily write a program to find the Kolmogorov complexity of any string less than 1 MB.

```
kolmogorov_map = defaultdict(lambda: infinity)
for each string p of length less than 1000000:
    run p for at most BB(1000000) steps, saving any output to o
    if p halted and kolmogorov_map[o] > len(p):
        kolmogorov_map[o] = len(p)  # found a smaller program for o
    else if p did not halt:
        pass  # p never halts and has no output
```

Replace BB(1000000) with a smaller number, say *A(Graham’s number, Graham’s number)*, and this calculator works for all programs which halt in less than *A(Graham’s number, Graham’s number)* steps. That includes pretty much every program I care about! For instance, it includes every program which could run under known physics in the age of the universe.
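The search above can be run for real at toy scale. Here is a minimal sketch (my code, not from the original comment), using Brainfuck as the programming language and an ordinary step bound standing in for BB(1000000), so slow-but-halting programs past the bound get missed:

```python
# Resource-bounded Kolmogorov complexity over a toy language:
# enumerate all short Brainfuck programs, run each for a bounded
# number of steps, and record the shortest program per output.
from itertools import product

CMDS = "+-<>[]."

def run(prog, max_steps=1000, tape_len=16):
    stack, match = [], {}
    for i, c in enumerate(prog):          # pre-match brackets
        if c == "[":
            stack.append(i)
        elif c == "]":
            if not stack:
                return None               # malformed program
            j = stack.pop()
            match[i], match[j] = j, i
    if stack:
        return None
    tape, ptr, pc, steps, out = [0] * tape_len, 0, 0, 0, []
    while pc < len(prog):
        steps += 1
        if steps > max_steps:
            return None                   # "did not halt" within the bound
        c = prog[pc]
        if c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ">": ptr = (ptr + 1) % tape_len
        elif c == "<": ptr = (ptr - 1) % tape_len
        elif c == ".": out.append(tape[ptr])
        elif c == "[" and tape[ptr] == 0: pc = match[pc]
        elif c == "]" and tape[ptr] != 0: pc = match[pc]
        pc += 1
    return tuple(out)

kolmogorov_map = {}
for n in range(5):                        # programs of length 0..4
    for prog in product(CMDS, repeat=n):
        out = run("".join(prog))
        if out is not None and out not in kolmogorov_map:
            kolmogorov_map[out] = n       # first hit is the shortest

print(kolmogorov_map[(1,)])               # 2, via "+."
```

Since programs are enumerated in order of length, the first program producing a given output is the shortest, so no `min` comparison is needed.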

Eh, don’t take it personally. I’m guessing commenters are implicitly taking the title question as a challenge and are pouncing to poke holes in your argument. I thought your essay was well written and thought-provoking. Keep posting!

Don’t know, not the original author. What do you think the chances are that an email on the third page of your inbox will ever get a reply? Inbox purgatory seems to me like a way to give up on something without having to admit it to yourself.

If my inbox has more than 40 or 50 items in it I feel demoralized and find it harder to work through newer items, so the easiest way for me to stay at steady-state is to keep my inbox at zero or close to it.

Counterpoint: I’ve kept to an empty inbox for many years, but know people with ever-growing inboxes whom I consider more organized and responsive. I’ve never declared email bankruptcy during my professional life and don’t know the consequences.

And nothing in here says anything about how to deal with that situation.

I read the advice as:

1. If you still have unresolved emails from 2015 in your inbox, then keeping emails in your inbox isn’t causing them to get resolved. Accept that, get a clean slate, and move on.

2. Make a folder called “old inbox” and put all your old emails there. Now you have an empty inbox!

3. *The costs of putting your old emails out of sight are less than the benefits of keeping an empty inbox going forward*.

HLS students of any skin color have high IQs as measured by standardized tests. The school’s 25th percentile LSAT score is 170, which is 97.5th percentile for the subset of college graduates who take the LSAT. 44% of HLS students are people of color.

The book to read is *Reasons and Persons* by Derek Parfit.

If you love your simulation as you love yourself, they will love you as they love themselves (and if you don’t, they won’t). You can choose to have enemies or allies with your own actions.

You and a thousand simulations of you play a game where pressing a button gives the presser $500 but takes $1 from each of the other players. Do you press the button?
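The payoffs can be made concrete (my arithmetic sketch, not part of the original comment):

```python
# Button game: pressing gives the presser $500 and takes $1 from
# each of the other 1000 players.
n_players = 1001                  # you plus 1000 simulations
gain = 500                        # the presser collects $500
loss_per_press = 1                # each press costs every other player $1

# Treating the simulations' choices as fixed, pressing looks like
# free money: +$500 for you no matter what they do.
marginal_gain = gain

# But the simulations are copies of you, so plausibly they press
# iff you press. If all 1001 players press, each collects $500 and
# pays $1 per other presser.
everyone_presses = gain - loss_per_press * (n_players - 1)
print(marginal_gain)              # 500
print(everyone_presses)           # -500
```

So the causal calculation says press, while the “my copies do what I do” calculation says each presser ends up $500 in the hole.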

What do you mean by “commit suicide” here? Memorize the results of 5 more coins?

Spitballing hacks around this:

- **Weigh hypotheses based on how many steps it takes for them to be computed on a dovetailing of all Turing machines.** This would probably put too much weight on programs that are large but fast to compute.
- **Weigh hypotheses on how much *space* it takes to compute them.** So dovetail all Turing machines of size up to *n*, limited to *n* bits of space, for at most *2^n* steps. This has the nice property that the prior is, like the hypotheses, space limited (using about twice as much space as the hypothesis).
- **Find some quantum algorithm that uses n^k qubits and polynomial time to magically evaluate all programs of length *n* in some semi-reasonable way.** If such a beast exists (which I doubt), it has the nice property that it “considers” all reasonably sized hypotheses yet runs in polynomial space and time.
- **Given *n* bits of evidence about the universe, consider all programs of length up to k1·log(n) run for at most n^k2 steps.** This has the nice property that it runs in polynomial time and space.
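The dovetailing trick in the first idea can be sketched with Python generators standing in for Turing machines (toy stand-ins, my code, not real TMs):

```python
from itertools import count

def dovetail(make_program):
    """In round r, start program r and give every not-yet-halted
    program one more step. Yields (program_index, return_value)
    whenever a program halts; non-halting programs never block it."""
    running = []
    for r in count():
        running.append(make_program(r))
        for i, prog in enumerate(running):
            if prog is None:
                continue
            try:
                next(prog)                  # one step of program i
            except StopIteration as stop:   # program i halted
                running[i] = None
                yield (i, stop.value)

# Toy programs: program i runs for i steps then halts returning i,
# except program 3, which loops forever.
def make_program(i):
    def gen():
        if i == 3:
            while True:
                yield
        for _ in range(i):
            yield
        return i
    return gen()

dt = dovetail(make_program)
first_five = [next(dt) for _ in range(5)]
print(first_five)   # [(0, 0), (1, 1), (2, 2), (4, 4), (5, 5)]
```

Note that program 3 never appears in the output yet the enumeration keeps making progress, which is exactly why dovetailing lets you rank hypotheses by steps-until-computed without solving the halting problem.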

I also found the answer to a question I’ve been researching for ~3 years.

Boy, did you ever! Congratulations!

I’m not sure if coin flips are quantumly random, or just hard enough to predict. Feels like coins would still work as well in a Newtonian universe. I tried to go with something that is *clearly* caused by quantum effects, like measuring whether an electron is polarized up or down. Luckily, there’s an app for that.

I set up an experiment to test quantum anthropics.

Flip four quantum coins. If they all came up heads, stop. If any of them came up tails, flip 5 more coins and (using mnemonics) think *really* hard about the exact coin flip sequence. If I find myself in a universe where the first four coins came up all heads, then with p < 0.0625, quantum weirdness kept me from finding myself in one of the universes where the state of my consciousness split me 512 ways.

I got access to a quantum random number generator, resolved to do the experiment, called a friend and told them I was about to do the experiment, and… chickened out and didn’t do the experiment.
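The two numbers in the protocol check out (my arithmetic, spelled out):

```python
# Four fair quantum coins stop the experiment only if all come up heads.
p_all_heads = 0.5 ** 4        # probability the first four coins stop you
# Otherwise all nine coins get flipped and memorized, splitting the
# observer across every distinct 9-flip outcome.
branches = 2 ** (4 + 5)       # distinct 9-coin sequences
print(p_all_heads)            # 0.0625
print(branches)               # 512
```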

I do not know how to interpret these results :-/

Minor naming feedback. You switched from calling something “supervised learning” to “reinforcement learning”. The first images that come to my mind when I hear “reinforcement learning” are TD-Gammon and reward signals. So, when I read “reinforcement learning”, I first think of a computer getting smarter through iterative navel-gazing, then think of a computer trying to wirehead itself, then stumble to the meaning I think you intend. I am a lay reader.

Other answers I’ve considered:

o) Simpler universes are more likely, but complicated universes vastly outnumber simple ones. It’s rare to be at the mode, even though the mode is the most common place to be.

p) Beings in simple universes don’t ask this question because their universe is simple. We are asking this question, therefore we are not in a simple universe.

2′) You don’t spend time pondering questions you can quickly answer. If you discover yourself thinking about a philosophy problem, you should expect to be on the stupider end of entities capable of thinking about that problem.

Oh! So you’re saying the spectrum of the acoustic noise at a given temperature will be the spectrum of black body radiation! Yes, I could definitely believe that. That is high-frequency indeed.

Essentially, an air molecule doesn’t have enough energy to register at your hearing sensors, that is, to move your eardrum (or cochlear hairs).

Though, now that I’m thinking about it, if the white noise generator I bought to help me sleep is *really* good at producing white noise with uniform power at high enough frequencies, an air molecule *would* have enough energy to move my eardrums. I would also be on fire.

And if my white noise generator is really *really* good at producing white noise with power uniform across all frequencies, the noise’s mass-energy will cause my bedroom to collapse into a black hole and I will be unable to leave a 5-star review on Amazon.

Do you happen to know a back-of-the-envelope way to get that 30 THz figure?
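One plausible route (my sketch, not confirmed in the thread): take the black-body peak at room temperature from Wien’s displacement law and convert the peak wavelength to a frequency, which lands near 30 THz:

```python
# Wien's displacement law: lambda_peak = b / T, then f_peak = c / lambda_peak.
b = 2.898e-3    # Wien's displacement constant, m*K
c = 3.0e8       # speed of light, m/s
T = 300.0       # room temperature, K

wavelength_peak = b / T           # about 9.7 micrometers
f_peak = c / wavelength_peak      # about 3.1e13 Hz
print(f_peak / 1e12)              # ~31 THz
```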

Which (possibly all) of the VNM axioms do you think are not appropriate as part of a formulation of rational behavior?

I think the Peano natural numbers are a reasonable model for the number of steins I own (with the possible exception that if my steins fill up the universe, a successor number of steins might not exist). But I don’t think the Peano axioms are a good model for how much beer I drink. It is not the case that all quantities of beer can be expressed as successors to 0 beer, so beer does not follow the axiom of induction.
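A sketch to make the analogy concrete (toy encoding of my own devising): in a Peano-style representation, every value is either zero or the successor of another value, which works for counting steins but has no finite term for half a beer.

```python
# Toy Peano naturals: zero, and successor as one level of nesting.
zero = ()

def succ(n):
    return (n,)

def to_int(n):
    k = 0
    while n != ():
        n, k = n[0], k + 1
    return k

steins = succ(succ(succ(zero)))   # counting steins works fine
print(to_int(steins))             # 3

# But no finite chain succ(succ(...(zero)...)) equals half a beer:
# continuous quantities are not built by induction from zero.
```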

I think the ZFC axioms are a poor model of impressionist paintings. For example, it is not the case that for every pair of impressionist paintings *x* and *y*, there exists an impressionist painting that contains both *x* and *y*. Therefore impressionist paintings violate the axiom of pairing.