By the way:
Human: “What do you care about 3 paperclips? Haven’t you made trillions already? That’s like a rounding error!”
Paperclip Maximizer: “How can you talk about paperclips like that?”

PM: “What do you care about a billion human algorithm-continuities? You’ve got virtually the same one in billions of others! And you’ll even be able to embed the algorithm in machines one day!”
H: “How can you talk about human lives that way?”
A few questions and comments:
1) What kind of dinner party was this? It’s great to expose non-rigorous beliefs, but was that the right place to show off your superiority? It seems you came off as having an inferiority complex, though obviously I wasn’t there. For example, I know that when I’m at a party (of most types), my first goal ain’t exactly to win philosophical arguments …
2) Why did you have to involve Aumann’s theorem? You caught him in a contradiction; the question of whether people can rationally agree to disagree is, at least it seems to me, an unnecessary distraction. And for all he knows, you could just be making that up to intimidate him. And Aumann’s theorem certainly doesn’t imply that, at any given moment, rectifying that particular inconsistency is an optimal use of someone’s time.
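For reference, since the theorem got invoked: a minimal statement of the result (Aumann 1976, “Agreeing to Disagree”), whose preconditions are stronger than the dinner-table usage suggests:

\[
\text{If agents } 1 \text{ and } 2 \text{ share a common prior, and their posteriors } q_1, q_2 \text{ for an event } E \text{ are common knowledge between them, then } q_1 = q_2.
\]

Common priors and common knowledge of each other’s posteriors almost certainly didn’t hold between two strangers over dinner, which is part of why citing the theorem there reads more as intimidation than argument.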
3) It seems what he was really trying to say was something along the lines of “while you could make an intelligence, its emotions would not be real the way humans’ are”. (“Submarines aren’t really swimming.”) I probably would have at least attempted to verify whether that’s what he meant rather than latching onto the most ridiculous meaning I could find.
4) I’ve had the same experience with people who fervently hold beliefs but don’t consider tests that could falsify them. In my case, it’s usually with people who insist that the true rate of inflation in the US is ~12%, all the time. I always ask, “So what basket of commodity futures can I buy that consistently makes 12% nominal?”
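A worked version of that test, as a sketch (the assumption being that some basket of commodity futures actually tracks the claimed “true” price level): if prices really compound at 12% a year, then over a decade such a basket should grow by

\[
(1.12)^{10} \approx 3.11,
\]

i.e., roughly triple in nominal terms. Anyone who genuinely holds the 12% belief can check this with their own money, which is exactly what makes it a falsifying test.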