
Nathan1123

Karma: 40

[Question] How do I know if my first post should be a post, or a question?

Nathan1123 · 4 Aug 2022 1:46 UTC
3 points
4 comments · 1 min read · LW link

Deontology and Tool AI

Nathan1123 · 5 Aug 2022 5:20 UTC
4 points
5 comments · 6 min read · LW link

Newcombness of the Dining Philosophers Problem

Nathan1123 · 6 Aug 2022 21:58 UTC
10 points
2 comments · 2 min read · LW link

[Question] How would Logical Decision Theories address the Psychopath Button?

Nathan1123 · 7 Aug 2022 15:19 UTC
5 points
33 comments · 1 min read · LW link

[Question] Are ya winning, son?

Nathan1123 · 9 Aug 2022 0:06 UTC
14 points
13 comments · 2 min read · LW link

[Question] How would two superintelligent AIs interact, if they are unaligned with each other?

Nathan1123 · 9 Aug 2022 18:58 UTC
4 points
6 comments · 1 min read · LW link

[Question] Do advancements in Decision Theory point towards moral absolutism?

Nathan1123 · 11 Aug 2022 0:59 UTC
0 points
4 comments · 4 min read · LW link

Dissected boxed AI

Nathan1123 · 12 Aug 2022 2:37 UTC
−8 points
2 comments · 1 min read · LW link

Infant AI Scenario

Nathan1123 · 12 Aug 2022 21:20 UTC
1 point
0 comments · 3 min read · LW link

An Uncanny Prison

Nathan1123 · 13 Aug 2022 21:40 UTC
3 points
3 comments · 2 min read · LW link

[Question] What is the probability that a superintelligent, sentient AGI is actually infeasible?

Nathan1123 · 14 Aug 2022 22:41 UTC
−3 points
6 comments · 1 min read · LW link

[Question] Could the simulation argument also apply to dreams?

Nathan1123 · 17 Aug 2022 19:55 UTC
6 points
4 comments · 3 min read · LW link

[Question] Is there any literature on using socialization for AI alignment?

Nathan1123 · 19 Apr 2023 22:16 UTC
10 points
9 comments · 2 min read · LW link