I generally agree with Rob here (and I think it’s more useful for ai-crotes to engage with Rob and read the relevant sequence posts; my comment here assumes some sophisticated background, including having read the posts Rob suggested).
But, I’m not sure I agree with this paragraph as written. Some caveats:
I know at least one person who has made a conscious commitment to dedicate some of their eventual surplus resources (i.e. somewhere on the order of 1% of their post-singularity resources) to “try to figure out what evolution was trying to do when it created me, and do some of it” (e.g. create a planet with tons of DNA in a pile, create copies of themselves, etc.).
This is not because you can cooperate with evolution-in-particular, but as part of a general strategy of maximizing your values across universes, including simulations (i.e. Beyond Astronomical Waste). For example: “be the sort of agent that, if an engineer were white-boarding out your decision-making, they could see that you robustly cooperate in appropriate situations, including if the engineers failed to give you the values they were trying to give you.”
By being the sort of person who tries to understand what your creator was intending, and to help said creator as best you can, you get access to more multiverse resources (across all possible creators).
[My own current position is that this sounds reasonable, but I have tons of philosophical uncertainty about it. My own current commitment is something like “I promise to think hard about these issues if given more resources/compute, and to do the right thing.” The hope is that by committing to that explicitly rather than incidentally, you can show up earlier in lower-resolution simulations.]