I think you are correct that I cannot cleanly separate the things in my past that I know from the things in my past that I do not know. For example, suppose a probability is chosen uniformly at random from the unit interval, and then a coin with that probability is flipped a large number of times, and I see some of the results. I do not know the true probability, but the coin flips that I see really should come after the thing that determines the probability in my Bayes' net.
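The setup above can be sketched in a few lines. This is a hedged illustration, not anything from the original discussion: under a uniform prior on the bias, observing some flips gives a Beta posterior over the bias, so the observations sharpen our beliefs without ever revealing the true value that causally precedes them.

```python
import random

random.seed(0)
p = random.random()                      # hidden true bias, drawn uniformly from [0, 1]
flips = [random.random() < p for _ in range(1000)]

observed = flips[:100]                   # we only see some of the results
heads = sum(observed)
tails = len(observed) - heads

# Under a uniform prior, the posterior over the bias is
# Beta(heads + 1, tails + 1), with mean (heads + 1) / (n + 2).
posterior_mean = (heads + 1) / (len(observed) + 2)
print(posterior_mean)
```

The point mirrors the comment: `p` sits upstream of the flips in the Bayes net, yet it remains unknown while the downstream flips are observed.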
The uniqueness of 0 is only roughly equivalent to the half-plane definition if you also assume convexity (i.e., the existence of independent coins of no value).
I added the word unit.
I think these titles should have dates instead of or in addition to numbers for historical context.
I think this is similar to Security Mindset, so you might want to think about this post in relation to that.
Ok, I have two other things to submit:
Counterfactual Mugging Poker Game and Optimization Amplifies.
I hope that your decision procedure includes a part where, if I win, you choose whichever subset of my posts you most want to draw attention to. I think that a single post would get a larger signal boost than each post in a group of three, and I would not be offended if one or two of my posts get cut from the announcement post to increase the signal for other things.
No, sorry. It wouldn’t be very readable, and it is easy to do yourself.
I am actually worried that because I posted it, people will think it is more relevant to AI safety than it really is. I think it is a little related, but not strongly.
I do think it is surprising and interesting. I think it is useful for thinking about civilization and civilizational collapse and what aliens (or maybe AI or optimization daemons) might look like. My inner Andrew Critch also thinks it is more directly related to AI safety than I do. Also if I thought multipolar scenarios were more likely, I might think it is more relevant.
Also, it is made out of pieces such that thinking about it was a useful exercise. I am thinking a lot about Nash equilibria and dynamics. I think the fact that Nash equilibria are not exactly a dynamic type of object and are not easy to find is very relevant to understanding embedded agency. I also think that modal combat is relevant, because Löbian handshakes are pointing at an important part of reasoning about oneself.
I think it is relevant enough that it was worth doing, and such that I would be happy if someone expanded on it, but I am not planning on thinking about it much more because it does feel only tangentially related.
That being said, many times I have explicitly thought that I was working on a thing that was not really related to the bigger problems I wanted to be working on, only to later see a stronger connection.
That was wrong. Fixed it. Thanks.
I think the comments here point out just how much we do not have common knowledge about this thing that we are pretending we have common knowledge about.
The FLI Beneficial AI workshop and the CHAI annual workshops have both been under the Chatham House Rule, for example. I don’t know about outside of AI safety.
I agree that most people do not expect the rules to be treated as sacred. I still want the rules to be such that someone could (without great cost) treat them as sacred if they wanted to.
That or it should be explicitly stated that you are only expected to loosely follow the spirit of the rule.
There is this: https://github.com/machine-intelligence/provability