Chris_Leong · Karma: 6,622
Posts
Decoupling vs Contextualising Norms · 14 May 2018 22:44 UTC · 155 points · 51 comments · 2 min read · LW link · 3 reviews
Don’t Dismiss Simple Alignment Approaches · 7 Oct 2023 0:35 UTC · 127 points · 8 comments · 4 min read · LW link
Notice When People Are Directionally Correct · 14 Jan 2024 14:12 UTC · 121 points · 7 comments · 2 min read · LW link
On Destroying the World · 28 Sep 2020 7:38 UTC · 78 points · 86 comments · 5 min read · LW link
Challenges with Breaking into MIRI-Style Research · 17 Jan 2022 9:23 UTC · 75 points · 15 comments · 3 min read · LW link
The World According to Dominic Cummings · 14 Apr 2020 5:05 UTC · 69 points · 14 comments · 7 min read · LW link
Google “We Have No Moat, And Neither Does OpenAI” · 4 May 2023 18:23 UTC · 61 points · 28 comments · 1 min read · LW link (www.semianalysis.com)
[Question] What’s the theory of impact for activation vectors? · 11 Feb 2024 7:34 UTC · 57 points · 12 comments · 1 min read · LW link
Interviews on Improving the AI Safety Pipeline · 7 Dec 2021 12:03 UTC · 55 points · 15 comments · 17 min read · LW link
[Question] Can we get an AI to do our alignment homework for us? · 26 Feb 2024 7:56 UTC · 53 points · 33 comments · 1 min read · LW link
The Hammer and the Dance · 20 Mar 2020 16:09 UTC · 48 points · 5 comments · 1 min read · LW link (medium.com)
Should rationality be a movement? · 20 Jun 2019 23:09 UTC · 48 points · 13 comments · 3 min read · LW link
General Thoughts on Less Wrong · 3 Apr 2022 4:09 UTC · 44 points · 14 comments · 2 min read · LW link
The Sense-Making Web · 4 Jan 2021 6:17 UTC · 41 points · 21 comments · 6 min read · LW link
[Question] Why is Bayesianism important for rationality? · 1 Sep 2020 4:24 UTC · 37 points · 24 comments · 1 min read · LW link
$1000 USD prize—Circular Dependency of Counterfactuals · 1 Jan 2022 9:43 UTC · 37 points · 102 comments · 4 min read · LW link
Yann LeCun on AGI and AI Safety · 6 Aug 2023 21:56 UTC · 37 points · 13 comments · 1 min read · LW link (drive.google.com)
No option to report spam · 3 Dec 2018 13:40 UTC · 37 points · 13 comments · 1 min read · LW link
[Question] What does the launch of x.ai mean for AI Safety? · 12 Jul 2023 19:42 UTC · 35 points · 3 comments · 1 min read · LW link
[Question] How strong is the evidence for hydroxychloroquine? · 5 Apr 2020 9:32 UTC · 35 points · 14 comments · 1 min read · LW link