“Are specific humans currently being acausally blackmailed? Either by Roko’s idea or by something similar. This would be an empirical claim, and finding the humans in question would be the best approach.” If by this you mean ‘are there any humans who are attempting to appease the basilisk, regardless of whether it exists?’, then I would say yes. You could even argue that this post is my own attempt to do so, given the uncertainty, and because I think it is in the best interests of others who have thought about the argument in sufficient depth.
If you mean ‘Are there any humans actually being simulated by the basilisk, or existing in a form which will be simulated?’, then I don’t claim to be able to settle this question in the post; what I do suggest is that it can’t be ruled out or consigned to negligible probability.
“Would it be rational for a non-human agent (because humans are not sufficiently well-modeled to answer this question of them) to change its behavior for this kind of acausal trade (trade and blackmail being indistinguishable in pure logic)?”
I think this would again depend on whether the logic actually ‘works’ in the situation in which it would need to be thought, as well as on the agent’s utility function. (For example, if its utility function is symmetrical in the way I describe in the post, it might make sense to ignore the basilisk.)
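To make the symmetry point concrete, here is a minimal toy calculation. The particular symmetry used, the probabilities, and the payoff numbers are all illustrative assumptions, not necessarily the exact construction described in the post: suppose the agent assigns equal credence to a blackmailer that punishes non-compliance and to a mirror-image agent that punishes compliance by the same amount. The punishment terms then cancel between the two actions, and only the certain cost of complying remains, so ignoring the threat comes out ahead.

```python
# Toy expected-utility comparison under an assumed symmetric threat structure.
# All numbers are hypothetical; this is a sketch of the symmetry argument,
# not a model of any real agent.

def expected_utility(action: str, p: float, punishment: float,
                     cost_of_compliance: float) -> float:
    """Expected utility of an action, given probability p of each of two
    mirror-image threateners: one punishes non-compliance, the other
    punishes compliance by the same amount."""
    if action == "comply":
        # The mirror agent punishes compliance, and the compliance cost
        # is paid for certain.
        return -p * punishment - cost_of_compliance
    elif action == "ignore":
        # The original blackmailer punishes non-compliance.
        return -p * punishment
    raise ValueError(f"unknown action: {action}")

p, punishment, cost = 0.5, 100.0, 10.0
print(expected_utility("ignore", p, punishment, cost))   # -50.0
print(expected_utility("comply", p, punishment, cost))   # -60.0
```

Under this (assumed) symmetry the punishment term is identical for both actions, so ignoring weakly dominates regardless of how large the punishment is.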
I would also note that I expect a being sufficiently more intelligent than a human would have ways of preventing itself from thinking about many, though probably not all, acausal extortion scenarios before becoming entangled in the logic. (Though this is just a guess.)
“It seems to be about arguments, not about actual reality. I think it’s not quite the right approach for either question you might be asking.” The thing is, in this case, arguments might have a way of influencing ‘physical reality’, so constraining oneself to thinking only about the latter might be a mistake, as I argue in the post. If you want to avoid thinking about these arguments, you might need to discard timeless decision theory.
Thanks for engaging with the post.