Alternate Alignment Ideas
abramdemski · 15 May 2019 17:22 UTC

These are 'brainstorming' posts, around the theme of what it means for a system to be helpful to a human.

Stable Pointers to Value: An Agent Embedded in Its Own Utility Function
abramdemski · 17 Aug 2017 0:22 UTC · 24 points · 9 comments · 5 min read

Stable Pointers to Value II: Environmental Goals
abramdemski · 9 Feb 2018 6:03 UTC · 19 points · 3 comments · 4 min read

Stable Pointers to Value III: Recursive Quantilization
abramdemski · 21 Jul 2018 8:06 UTC · 20 points · 4 comments · 4 min read

Policy Alignment
abramdemski · 30 Jun 2018 0:24 UTC · 54 points · 25 comments · 8 min read

Non-Consequentialist Cooperation?
abramdemski · 11 Jan 2019 9:15 UTC · 50 points · 15 comments · 7 min read