Attainable Utility Theory: Why Things Matter
by TurnTrout · 27 Sep 2019 16:48 UTC · LW: 65, AF: 23 · 24 comments · 1 min read
Tags: Impact Regularization, World Modeling
If you haven’t read the prior posts, please do so now. This sequence can be spoiled.
¯\_(ツ)_/¯
What links here?
Non-Obstruction: A Simple Concept Motivating Corrigibility by TurnTrout (21 Nov 2020 19:35 UTC; 74 points)
Reasons for Excitement about Impact of Impact Measure Research by TurnTrout (27 Feb 2020 21:42 UTC; 33 points)
Rohin Shah's comment on AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah by Palus Astra (14 May 2020 20:33 UTC; 13 points)
Kaj_Sotala's comment on World State is the Wrong Abstraction for Impact by TurnTrout (2 Oct 2019 14:38 UTC; 10 points)
TurnTrout's comment on Conclusion to ‘Reframing Impact’ by TurnTrout (16 May 2020 17:32 UTC; 9 points)
Rohin Shah's comment on Conclusion to ‘Reframing Impact’ by TurnTrout (24 May 2020 20:55 UTC; 8 points)
TurnTrout's comment on Avoiding Side Effects in Complex Environments by TurnTrout (15 Dec 2020 23:38 UTC; 5 points)
TurnTrout's comment on Open & Welcome Thread—February 2020 by ryan_b (10 Feb 2020 15:27 UTC; 2 points)
Part of the sequence: Reframing Impact
Previous: Deducing Impact · Next: World State is the Wrong Abstraction for Impact