The Creating Bob the Jerk problem. Is it a real problem in decision theory?

I was recently reading about the Transparent Newcomb with your Existence at Stake problem, which, to make a long story short, states that you were created by Prometheus, who foresaw that you would one-box on Newcomb’s problem and wouldn’t have created you if he had foreseen otherwise. The implication is that you might need to one-box just to exist. It’s a disturbing problem, and as I read it an even more disturbing one started to form in my head. However, I’m not sure it’s logically coherent (I’m really hoping it isn’t), and I wanted to know what the rest of you think. The problem goes:

One day you start thinking about a hypothetical nonexistent person named Bob, who is a real jerk: if he existed, he would make your life utterly miserable. However, he would also want to make a deal with you. If he ever found himself existing in a universe where you had never existed, he would create you, on the condition that if you found yourself existing in a universe where he had never existed, you would create him. Hypothetical Bob is very good at predicting the behavior of other people, not quite Omega quality, but pretty darn good. Assume for the sake of the argument that you like your life and enjoy existing.

At first you dismiss the problem because of technical difficulties. Science hasn’t advanced to the point where we can make people with such precision. Plus, there is a near-infinite number of far nicer hypothetical people who would make the same deal; when science reaches that point, you should give priority to creating them.

But then you see Omega drive by in its pickup truck. A large, complicated machine falls off the back of the truck as it passes you. Written on it, in Omega’s handwriting, is a note that says “This is the machine that will create Bob the Jerk, a hypothetical person that [insert your name here] has been thinking about recently, if one presses the big red button on the side.” You know Omega never lies, not even in notes to itself.

Do Timeless Decision Theory and Updateless Decision Theory say you have a counterfactual obligation to create Bob the Jerk, the same way you have an obligation to pay Omega in the Counterfactual Mugging, and the same way you might (I’m still not sure about this) have an obligation to one-box when dealing with Prometheus? Does this in turn mean that when we develop the ability to create people from scratch we should tile the universe with people who would make the counterfactual deal? Obviously it’s that last implication that disturbs me.
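
To make the question concrete, here is a minimal sketch of the updateless calculation as I understand it. Everything in it is an assumption of mine rather than part of the problem: a toy 50/50 prior over which of you exists unaided, a utility of 10 for existing, and a disutility of 4 for sharing a world with Bob.

```python
# A toy sketch, not part of the original problem: all numbers below are
# assumptions (a 50/50 prior over which of you exists unaided, a utility
# of 10 for existing, a disutility of -4 for sharing a world with Bob).

P_YOU_FIRST = 0.5     # assumed prior: you exist without Bob's help
U_EXIST = 10          # assumed utility of existing at all
U_BOB_MISERY = -4     # assumed disutility of Bob making your life miserable

def updateless_utility(policy):
    # World A: you exist on your own and choose whether to press the button.
    world_a = U_EXIST + (U_BOB_MISERY if policy == "create" else 0)
    # World B: Bob exists on his own; being a near-perfect predictor, he
    # creates you only if he predicts your policy is "create".
    world_b = (U_EXIST + U_BOB_MISERY) if policy == "create" else 0
    # UDT scores the whole policy across both worlds, without first
    # conditioning on the observation "I exist".
    return P_YOU_FIRST * world_a + (1 - P_YOU_FIRST) * world_b

for policy in ("create", "refuse"):
    print(policy, updateless_utility(policy))  # create 6.0, refuse 5.0
```

With these made-up numbers the deal goes through; make U_BOB_MISERY worse than -5 and it doesn’t. If this is even the right formalization, the tiling conclusion would only cover counterfactual partners whose company is a mild enough cost.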

I can think of multiple reasons why it might not be rational to create Bob the Jerk:

  • It might not be logically coherent to refuse to update on the fact of your own existence, even in UDT (this would also imply that one should two-box when dealing with Prometheus; see the sketch after this list).

  • An essential part of who you are is the fact that you were created by your parents, not by Bob the Jerk, so the counterfactual deal isn’t logically coherent. Someone he creates wouldn’t be you; it would be someone else. At best he could create someone with a very similar personality and falsified memories, which would be rather horrifying.

  • By the same token, being created by you, with some help from Omega, is an essential part of who Bob the Jerk is. He can’t exist in a universe where you don’t, so the hypothetical bargain he offered isn’t logically coherent.

  • Prometheus will exist no matter what you do in his problem; Bob the Jerk won’t. This makes the two problems qualitatively different in some way I don’t quite understand.

  • You have a moral duty not to inflict Bob the Jerk on others, even if it means you don’t exist in some other possibility.

  • You have a moral duty not to overpopulate the world, even if it means you might not exist in some other possibility, and the logic of this problem, taken to its conclusion, implies overpopulating the world.

  • If we live in a Big World, Bob the Jerk already exists somewhere, so you have no need to fulfill your part of the bargain: he’s already out there.

  • Making these sorts of counterfactual deals is individually rational but collectively harmful, in the same way that paying a ransom is. If you create Bob the Jerk, some civic-minded vigilante decision theorist might see the implications and find some way to punish you.

  • While it is possible to want to keep on existing if you already exist, it isn’t logically possible to “want to exist” if you don’t already; this defeats the problem in some way.

  • After some thought, you turn to a hypothetical individual called Bizarro-Bob. Bizarro-Bob doesn’t want Bob the Jerk to be created and is just as good at modeling your behavior as Bob the Jerk is. He has vowed that if he finds himself existing in a universe where you will create Bob the Jerk, he’ll kill you. As you stand by Omega’s machine, you start looking around anxiously for the glint of light off a gun barrel.

  • I don’t understand UDT or TDT properly, and they don’t actually imply I should create Bob the Jerk, for some reason I haven’t thought of because of my lack of understanding.
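
To illustrate the first objection on the list: an agent that conditions on its own existence before choosing throws world B away entirely, and with it Bob’s half of the bargain. Reusing the assumed numbers from the sketch above:

```python
# The same toy numbers, for an agent that first updates on "I exist".
# Conditioning on existence leaves only the world where you were created
# without Bob's help, so his reciprocal offer never enters the sum.

U_EXIST = 10          # assumed utility of existing, as before
U_BOB_MISERY = -4     # assumed disutility of Bob's company, as before

def updated_utility(action):
    # Only world A survives the update; your action there is the whole story.
    return U_EXIST + (U_BOB_MISERY if action == "create" else 0)

print(updated_utility("create"), updated_utility("refuse"))  # 6 10
```

After updating, “refuse” dominates for any negative U_BOB_MISERY, which is why this objection, if it is coherent, dissolves the problem, and by the same reasoning recommends two-boxing against Prometheus.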

Are any of these objections valid, or am I just grasping at straws? I find the problem extremely disturbing because of its wider implications, so I’d appreciate it if someone with a better grasp of UDT and TDT analyzed it. I’d very much like to be refuted.