I am actually worried that because I posted it, people will think it is more relevant to AI safety than it really is. I think it is a little related, but not strongly.
I do think it is surprising and interesting. I think it is useful for thinking about civilization and civilizational collapse and what aliens (or maybe AI or optimization daemons) might look like. My inner Andrew Critch also thinks it is more directly related to AI safety than I do. Also, if I thought multipolar scenarios were more likely, I might think it is more relevant.
Also, it is made out of pieces such that thinking about it was a useful exercise. I am thinking a lot about Nash equilibria and dynamics. I think the fact that Nash equilibria are not exactly a dynamic type of object and are not easy to find is very relevant to understanding embedded agency. Also, I think that modal combat is relevant, because I think that Löbian handshakes are pointing at an important part of reasoning about oneself.
I think it is relevant enough that it was worth doing, and such that I would be happy if someone expanded on it, but I am not planning on thinking about it much more because it does feel only tangentially related.
That being said, many times I have explicitly thought that I was thinking about a thing that was not really related to the bigger problems I wanted to be working on, only to later see a stronger connection.
That was wrong. Fixed it. Thanks.
I think the comments here point out just how much we do not have common knowledge about this thing that we are pretending we have common knowledge about.
The FLI Beneficial AI workshop and the CHAI annual workshops have both been under the Chatham House Rule, for example. I don’t know about outside of AI safety.
I agree that most people do not expect the rules to be treated as sacred. I still want the rules to be such that someone could (without great cost) treat them as sacred if they wanted to.
Either that, or it should be explicitly stated that you are only expected to loosely follow the spirit of the rule.
There is this: https://github.com/machine-intelligence/provability
I might post something else later this month, but if not, my submission is my new Prisoners’ Dilemma thing.
I am sad that the karma needed to suggest curation is exactly the same as the karma needed to moderate. I want more goalposts, not fewer.
Mine was in the text editor. Even in the text editor, Cmd-4 sends me to my 4th tab in the window instead of entering LaTeX.
I think that we should schedule a video chat. I might have a lot of content for you. Email me?
I don't know, but Ctrl-Cmd-4 did work.
Cmd-4 does not work on Safari... :(
Meta: The word count is very off on this post. I currently see it as 73K. I am not sure what happened, but I believe:
I made a small edit.
I pressed submit.
It appeared that nothing happened.
I pressed submit a bunch of times.
It still appeared that nothing happened.
I went back, and looked at the post.
The edit was made, but the word count became huge.
I think you want to reward output, rather than only rewarding output that would not otherwise have happened.
This is similar to the fact that if you want to train calibration, you have to optimize your log score, and just observe your lack of calibration as an opportunity to increase your log score.
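For concreteness, here is a minimal sketch (in Python, not from the original comment; the `log_score` helper is my own illustrative naming) of what optimizing log score looks like. Because log score is a proper scoring rule, an overconfident forecaster who misses a high-confidence call loses far more score than a more calibrated one, so miscalibration shows up as lost score rather than as a separate thing to optimize.

```python
# Illustrative sketch: log score as a proper scoring rule.
import math

def log_score(forecasts, outcomes):
    """Sum of log-probability assigned to what actually happened (higher is better)."""
    return sum(
        math.log(p if happened else 1.0 - p)
        for p, happened in zip(forecasts, outcomes)
    )

outcomes = [True, False, True, False, True]

# An overconfident forecaster who always says 99% "yes" ...
overconfident = [0.99, 0.99, 0.99, 0.99, 0.99]
# ... versus a more modest forecaster using 80% / 20%.
modest = [0.80, 0.20, 0.80, 0.20, 0.80]

print(log_score(overconfident, outcomes))  # ~ -9.2: heavy penalty on the two misses
print(log_score(modest, outcomes))         # ~ -1.1: much better total score
```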
Maybe in the form of an LW sequence.
It seems like maybe there should be an archive page for past rounds.