Thanks! Reading this comment makes me very happy, because it seems like you are now in a headspace similar to the one I was in back in the day. Writing this post was my response to being in that headspace.
But… I dunno man. I figured the first rule of Acausal Trade was “build a galaxy brain and think really goddamn carefully about acausal trade and philosophical competence” before you actually try simulating anything, and I’m skeptical a galaxy brain can’t figure out the right precommitments.
This sounds like a plausibly good rule to me. But that doesn’t mean that every AI we build will automatically follow it. Moreover, thinking about acausal trade is in some sense engaging in acausal trade. As I put it:
Since real agents can’t be logically omniscient, one needs to decide how much time to spend thinking about things like game theory and what the outputs of various programs are before making commitments. When we add acausal bargaining into the mix, things get even more intense. Scott Garrabrant, Wei Dai, and Abram Demski have described this problem already, so I won’t say more about it here. Basically, in this context, there are many other people observing your thoughts and making decisions on that basis. So bluffing is impossible, and there is constant pressure to make commitments quickly rather than think longer. (That’s my take on it, anyway.)
As for your handwavy proposals, I do agree that they are pretty good. They are somewhat similar to the proposals I favor, in fact. But these are just specific proposals in a big space of possible strategies, and (a) we have reason to think there might be flaws in these proposals that we haven’t discovered yet, and (b) even if they work perfectly, there’s still the problem of making sure that our AI follows them:
Objection: “Surely they wouldn’t be so stupid as to make those commitments—even I could see that bad outcome coming. A better commitment would be...”
Reply: The problem is that consequentialist agents are motivated to make commitments as soon as possible, since that way they can influence the behavior of other consequentialist agents who may be learning about them. Of course, they will balance this motivation against the countervailing motive to learn more and think more before doing drastic things. The trouble is that the first motivation will push them to make commitments much sooner than would otherwise be optimal. So they might not be as smart as us when they make their commitments, at least not in all the relevant ways. Even if our baby AGIs are wiser than us, they might still make mistakes that we haven’t anticipated yet. The situation is like the centipede game: collectively, consequentialist agents benefit from learning more about the world and each other before committing to things. But because they are all bullies and cowards, they individually benefit from committing earlier, when they don’t know so much.
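To make the centipede-game analogy concrete, here is a toy backward-induction sketch. This is my own illustration, not anything from the post: the function name and the pot size, growth rate, and take/wait split are all made-up parameters. Each agent can commit now and grab the larger share of the current pot, or wait and let the pot grow. Waiting to the end is collectively best, yet backward induction drives commitment all the way to the very first move:

```python
# Toy model of the centipede-game dynamic described above: a sketch under
# made-up parameters, not anything from the original post. Two agents
# alternate moves. The mover can "commit" (grab the larger share of the
# current pot) or "wait" (let the pot grow and hand the move over). If
# everyone waits to the end, the fully grown pot is split evenly.

def solve_centipede(num_rounds: int, pot: float = 2.0,
                    growth: float = 1.5, take_share: float = 0.9):
    """Backward-induct over the game tree.

    Returns (round at which someone commits, equilibrium payoffs as
    (agent moving at round 0, other agent)). A commit round equal to
    num_rounds means no one ever commits.
    """
    # Terminal payoffs if every mover waits: split the final pot evenly.
    final_pot = pot * growth ** num_rounds
    value = (final_pot / 2, final_pot / 2)  # (current mover, other agent)
    commit_round = num_rounds

    for t in reversed(range(num_rounds)):
        current_pot = pot * growth ** t
        take = (take_share * current_pot, (1 - take_share) * current_pot)
        wait = (value[1], value[0])  # roles swap on the next move
        if take[0] > wait[0]:
            # The mover at round t prefers committing now to whatever
            # the (already-solved) future has in store.
            value = take
            commit_round = t
        else:
            value = wait
    return commit_round, value

if __name__ == "__main__":
    rnd, payoffs = solve_centipede(num_rounds=10)
    print(f"commitment at round {rnd}, payoffs {payoffs}")
    # Prints round 0: anticipating the other's future commitment, each
    # agent commits at its first opportunity, even though waiting would
    # leave both agents far better off.
```

With these parameters and ten rounds, the equilibrium payoff is (1.8, 0.2), even though mutual patience would give each agent roughly 57.7. That gap is the collective cost of the race to commit early.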
If you want to think and talk more about this, I’d be very interested to hear your thoughts. Unfortunately, while my estimate of the commitment races problem’s importance has only increased over the past year, I haven’t done much to actually make intellectual progress on it.
Yeah I’m interested in chatting about this.
I feel I should disclaim: “much of what I’d have to say about this is a watered-down version of whatever Andrew Critch would say.” He’s busy a lot, but if you haven’t chatted with him about this yet you probably should, and if you have, I’m not sure whether I’ll have much to add.
But I am pretty interested right now in fleshing out my own coordination principles, and my understanding of how they scale up from “200 human rationalists” to coalitions of 1,000–10,000, to All Humanity, to AGI and beyond. I’m currently working on a sequence that could benefit from chatting with other people who think seriously about this.