# Alternate Alignment Ideas

abramdemski · 15 May 2019 17:22 UTC

These are 'brainstorming' posts, around the theme of what it means for a system to be helpful to a human.

- **Stable Pointers to Value: An Agent Embedded in Its Own Utility Function** — abramdemski · 17 Aug 2017 0:22 UTC · 15 points · 9 comments · 5 min read · LW link
- **Stable Pointers to Value II: Environmental Goals** — abramdemski · 9 Feb 2018 6:03 UTC · 19 points · 2 comments · 4 min read · LW link
- **Stable Pointers to Value III: Recursive Quantilization** — abramdemski · 21 Jul 2018 8:06 UTC · 20 points · 4 comments · 4 min read · LW link
- **Policy Alignment** — abramdemski · 30 Jun 2018 0:24 UTC · 50 points · 25 comments · 8 min read · LW link
- **Non-Consequentialist Cooperation?** — abramdemski · 11 Jan 2019 9:15 UTC · 49 points · 15 comments · 7 min read · LW link