The Pointers Problem

Last edit: 31 May 2021 18:30 UTC by Multicore

The pointers problem refers to the fact that most humans would rather have an AI that acts on their real-world values, not merely on human estimates of those values, and that the two will differ in many situations, since humans are not all-seeing or all-knowing. It was introduced in a post of the same name.
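The gap between the two can be sketched formally (a rough gloss for illustration, not notation taken from the posts below): human values are a function of latent variables in the human's world model, while an AI acting on estimates optimizes a function of its estimates of those latents.

```latex
% Let \Lambda be latent variables in the human's world model,
% X the data actually observable, and u the human's valuation.
%
% True values (what the human wants optimized):
V = u(\Lambda)
% What an AI acting on estimates effectively optimizes:
\hat{V} = u\!\left(\mathbb{E}[\Lambda \mid X]\right)
% The two diverge exactly when the estimate
% \mathbb{E}[\Lambda \mid X] differs from \Lambda,
% i.e. whenever the observer is not all-seeing or all-knowing.
```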

The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables

johnswentworth · 18 Nov 2020 17:47 UTC · 85 points · 40 comments · 11 min read · 2 reviews

Stable Pointers to Value II: Environmental Goals

abramdemski · 9 Feb 2018 6:03 UTC · 18 points · 2 comments · 4 min read

Stable Pointers to Value III: Recursive Quantilization

abramdemski · 21 Jul 2018 8:06 UTC · 19 points · 4 comments · 4 min read

Stable Pointers to Value: An Agent Embedded in Its Own Utility Function

abramdemski · 17 Aug 2017 0:22 UTC · 15 points · 3 comments · 5 min read

Robust Delegation

4 Nov 2018 16:38 UTC · 110 points · 10 comments · 1 min read

[Intro to brain-like-AGI safety] 9. Takeaways from neuro 2/2: On AGI motivation

Steven Byrnes · 23 Mar 2022 12:48 UTC · 23 points · 5 comments · 23 min read

Updating Utility Functions

9 May 2022 9:44 UTC · 35 points · 7 comments · 8 min read

The Pointers Problem—Distilled

NinaR · 26 May 2022 22:44 UTC · 8 points · 0 comments · 2 min read