Karma: 17

# [Question] Are ya winning, son?

9 Aug 2022 0:06 UTC
14 points

# Newcombness of the Dining Philosophers Problem

6 Aug 2022 21:58 UTC
10 points

# [Question] How would Logical Decision Theories address the Psychopath Button?

7 Aug 2022 15:19 UTC
5 points

# Deontology and Tool AI

5 Aug 2022 5:20 UTC
4 points

# [Question] How would two superintelligent AIs interact, if they are unaligned with each other?

9 Aug 2022 18:58 UTC
4 points

# [Question] How do I know if my first post should be a post, or a question?

4 Aug 2022 1:46 UTC
3 points
• 9 Aug 2022 2:21 UTC
3 points

C,C is second-best: you prefer D,C, and Nash says D,D is all you should expect. C,C is definitely better than C,D or D,D, so in the special case of symmetrical decisions, it’s winning. It bugs me as much as you that this part gets glossed over so often.

I see what you mean; it works as long as both sides have roughly similar behavior.
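The payoff ordering discussed above can be sketched in a few lines. The specific numbers below are illustrative assumptions (the standard T > R > P > S structure), not figures from the original exchange:

```python
# Illustrative Prisoner's Dilemma payoffs; the numbers are assumptions,
# only the ordering D,C > C,C > D,D > C,D matters.
PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation: second-best for me
    ("D", "C"): 5,  # temptation: my best outcome
    ("D", "D"): 1,  # Nash equilibrium: all you "should" expect
    ("C", "D"): 0,  # sucker's payoff: my worst outcome
}

# The ordering the comment describes, from my point of view:
assert (PAYOFFS[("D", "C")] > PAYOFFS[("C", "C")]
        > PAYOFFS[("D", "D")] > PAYOFFS[("C", "D")])

# If decisions are symmetrical (both players choose alike), only the
# diagonal outcomes are reachable, and cooperation wins:
symmetric = {move: PAYOFFS[(move, move)] for move in ("C", "D")}
print(max(symmetric, key=symmetric.get))  # -> C
```

Restricting to the diagonal is exactly the "roughly similar behavior" condition: once D,C is off the table, C,C is the best remaining outcome.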

Counterfactual Mugging is a win to pay off, in a universe where that sort of thing happens. You really do want to be correctly predicted to pay off, and enjoy the \$10K in those cases where the coin goes your way.

For me, this would make intuitive sense if there were something in the problem implying that Omega does this on a regular basis, analogous to the Iterated Prisoner’s Dilemma. But as long as the problem is worded as a one-shot, once-in-a-lifetime scenario, it comes across as if the \$10,000 is purely fictitious.
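The "win to pay off" claim can be made concrete as an expected-value calculation over policies, evaluated before the coin is flipped. The \$100 payment is the commonly quoted stake; it and the fair coin are assumptions beyond the \$10K mentioned above:

```python
# Counterfactual Mugging, evaluated as a choice of policy before the
# coin flip. Assumed stakes: pay $100 on tails, receive $10,000 on
# heads if Omega predicts you would have paid.
P_HEADS = 0.5  # assumed fair coin

def expected_value(policy_pays: bool) -> float:
    """EV of adopting a policy, from before the coin is flipped."""
    if policy_pays:
        # Omega correctly predicts the payer and rewards heads.
        return P_HEADS * 10_000 + (1 - P_HEADS) * (-100)
    # Refusers pay nothing on tails and get nothing on heads.
    return 0.0

print(expected_value(True), expected_value(False))  # 4950.0 0.0
```

The one-shot objection above amounts to doubting that the heads branch is real; the calculation only favors paying if you actually expect to be "correctly predicted" across both branches.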

# Infant AI Scenario

12 Aug 2022 21:20 UTC
1 point
• That’s a very interesting and insightful dissection of the problem. Do you think there might be a problem in the post I copied the thought experiment from (which said that CDT presses, and EDT doesn’t), or did I make a mistake in taking it out of context?

• OK, if the button is thought of as the “second agent”, then I would guess TDT would not press the button. TDT would reason that the button will decide that whoever presses it is a psychopath, and therefore Paul should precommit not to press it. Is that the right way to approach it?
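The CDT-presses / EDT-doesn’t split mentioned above can be sketched numerically. All probabilities and utilities below are illustrative assumptions, not figures from the original thought experiment; the only structural point is that CDT evaluates pressing at the prior, while EDT conditions on the act:

```python
# Hedged sketch of the Psychopath Button under assumed numbers.
P_PSYCHO_PRIOR = 0.01        # Paul thinks he's probably not a psychopath
P_PSYCHO_GIVEN_PRESS = 0.99  # but pressing is strong evidence that he is

U_NO_PSYCHOPATHS = 10  # pressing works and Paul survives
U_PAUL_DIES = -100     # pressing while a psychopath kills him too
U_STATUS_QUO = 0       # not pressing changes nothing

def ev_press(p_psycho: float) -> float:
    return p_psycho * U_PAUL_DIES + (1 - p_psycho) * U_NO_PSYCHOPATHS

cdt = ev_press(P_PSYCHO_PRIOR)        # holds the prior fixed: press
edt = ev_press(P_PSYCHO_GIVEN_PRESS)  # conditions on pressing: don't
print(cdt > U_STATUS_QUO, edt > U_STATUS_QUO)  # True False
```

The precommitment reading in the comment above lands on the same answer as EDT here: if pressing itself settles the question of whether you are a psychopath, you want to be the kind of agent who never presses.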

• When it comes to AI regulation, a certain train of thought comes to my mind:

1. Because a superintelligent AI has never existed, we can assume that creating one requires an enormous amount of energy and resources.

2. Due to global inequality, certain regions of the world have vastly more access to energy and resources than others.

3. Therefore, when creating an AGI becomes possible, only a couple of regions of the world (and only a small number of people in these regions) will have the capability of doing so.

It follows that enforcement of AI regulations only has to focus on this very limited population, and educate them on the existential threat of UFAI.

I think it is best to consider it analogous to another man-made existential threat: nuclear weapons. True, there is always a concern about a leak in international regulations (the Soviet nuclear arsenal that disappeared with the fall of the USSR, for example), but generally speaking there is a great filter of cost (procuring and refining uranium, training and educating domestic nuclear researchers, etc.) such that only a handful of nations in the world have ever built such weapons.

# [Question] Do advancements in Decision Theory point towards moral absolutism?

11 Aug 2022 0:59 UTC
0 points