AnnaSalamon (Karma: 17,263)
Believing In · 8 Feb 2024 7:06 UTC · 202 points · 49 comments · 13 min read · LW link

[Question] Which parts of the existing internet are already likely to be in (GPT-5/other soon-to-be-trained LLMs)’s training corpus? · 29 Mar 2023 5:17 UTC · 49 points · 2 comments · 1 min read · LW link

[Question] Are there specific books that it might slightly help alignment to have on the internet? · 29 Mar 2023 5:08 UTC · 78 points · 25 comments · 1 min read · LW link

What should you change in response to an “emergency”? And AI risk · 18 Jul 2022 1:11 UTC · 328 points · 60 comments · 6 min read · LW link · 1 review

Comment reply: my low-quality thoughts on why CFAR didn’t get farther with a “real/efficacious art of rationality” · 9 Jun 2022 2:12 UTC · 253 points · 62 comments · 17 min read · LW link · 1 review

Narrative Syncing · 1 May 2022 1:48 UTC · 117 points · 48 comments · 7 min read · LW link · 1 review

The feeling of breaking an Overton window · 17 Feb 2021 5:31 UTC · 128 points · 29 comments · 1 min read · LW link · 1 review

“PR” is corrosive; “reputation” is not. · 14 Feb 2021 3:32 UTC · 308 points · 93 comments · 2 min read · LW link · 3 reviews

[Question] Where do (did?) stable, cooperative institutions come from? · 3 Nov 2020 22:14 UTC · 150 points · 72 comments · 4 min read · LW link

Reality-Revealing and Reality-Masking Puzzles · 16 Jan 2020 16:15 UTC · 258 points · 57 comments · 13 min read · LW link · 1 review

We run the Center for Applied Rationality, AMA · 19 Dec 2019 16:34 UTC · 108 points · 324 comments · 1 min read · LW link

AnnaSalamon’s Shortform · 25 Jul 2019 5:24 UTC · 20 points · 12 comments · 1 min read · LW link

“Flinching away from truth” is often about *protecting* the epistemology · 20 Dec 2016 18:39 UTC · 222 points · 58 comments · 7 min read · LW link

Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality” · 12 Dec 2016 19:39 UTC · 64 points · 38 comments · 5 min read · LW link

CFAR’s new mission statement (on our website) · 10 Dec 2016 8:37 UTC · 15 points · 14 comments · 1 min read · LW link · (www.rationality.org)

CFAR’s new focus, and AI Safety · 3 Dec 2016 18:09 UTC · 51 points · 88 comments · 3 min read · LW link

On the importance of Less Wrong, or another single conversational locus · 27 Nov 2016 17:13 UTC · 173 points · 365 comments · 4 min read · LW link

Several free CFAR summer programs on rationality and AI safety · 14 Apr 2016 2:35 UTC · 30 points · 14 comments · 2 min read · LW link

Consider having sparse insides · 1 Apr 2016 0:07 UTC · 26 points · 25 comments · 1 min read · LW link

The correct response to uncertainty is *not* half-speed · 15 Jan 2016 22:55 UTC · 258 points · 45 comments · 3 min read · LW link