Oxford Rationalish—Sept Pub with Lightning Talks ⚡️
Sam · 18 Aug 2022 13:37 UTC · 2 points · 0 comments · 1 min read · LW link

Astral Codex Ten meetup in Prague [Oct 6]
Jiří Nádvorník · 18 Aug 2022 12:15 UTC · 2 points · 0 comments · 1 min read · LW link

Playing Without Affordances
Alex Hollow · 18 Aug 2022 11:53 UTC · 5 points · 0 comments · 1 min read · LW link (alexhollow.wordpress.com)

Goal-directedness: relativising complexity
Morgan_Rogers · 18 Aug 2022 9:48 UTC · 2 points · 0 comments · 11 min read · LW link

Bangalore—ACX Meetups Everywhere ’22 + September Social
Nihalm · 18 Aug 2022 8:10 UTC · 1 point · 0 comments · 1 min read · LW link

What’s up with the bad Meta projects?
Yitz · 18 Aug 2022 5:34 UTC · 16 points · 8 comments · 1 min read · LW link

Seattle ACX Everywhere—October 2022
Nikita Sokolsky · 18 Aug 2022 5:13 UTC · 3 points · 0 comments · 1 min read · LW link

Astral Codex Ten Meetups Everywhere: Boston 2022
robirahman · 18 Aug 2022 3:48 UTC · 2 points · 0 comments · 1 min read · LW link

Announcing Encultured AI: Building a Video Game
Andrew_Critch and Nick Hay · 18 Aug 2022 2:16 UTC · 52 points · 5 comments · 4 min read · LW link

Detroit ACX September Meetup
MattArnold · 18 Aug 2022 0:48 UTC · 1 point · 0 comments · 1 min read · LW link

Matt Yglesias on AI Policy
Grant Demaree · 17 Aug 2022 23:57 UTC · 21 points · 0 comments · 1 min read · LW link (www.slowboring.com)

Spoons and Myofascial Trigger Points
vitaliya · 17 Aug 2022 22:54 UTC · 2 points · 1 comment · 1 min read · LW link

Concrete Advice for Forming Inside Views on AI Safety
Neel Nanda · 17 Aug 2022 22:02 UTC · 16 points · 1 comment · 10 min read · LW link

Meetups Everywhere 2022!
JS · 17 Aug 2022 21:54 UTC · 1 point · 0 comments · 1 min read · LW link

Progress links and tweets, 2022-08-17
jasoncrawford · 17 Aug 2022 21:27 UTC · 11 points · 0 comments · 2 min read · LW link (rootsofprogress.org)

Conditioning, Prompts, and Fine-Tuning
Adam Jermyn · 17 Aug 2022 20:52 UTC · 23 points · 1 comment · 4 min read · LW link

The Core of the Alignment Problem is...
Thomas Larsen, Jeremy Gillen and AtlasOfCharts · 17 Aug 2022 20:07 UTC · 26 points · 4 comments · 9 min read · LW link

[Question] Could the simulation argument also apply to dreams?
Nathan1123 · 17 Aug 2022 19:55 UTC · 5 points · 0 comments · 3 min read · LW link

Interpretability Tools Are an Attack Channel
Thane Ruthenis · 17 Aug 2022 18:47 UTC · 24 points · 13 comments · 1 min read · LW link

Human Mimicry Mainly Works When We’re Already Close
johnswentworth · 17 Aug 2022 18:41 UTC · 42 points · 7 comments · 5 min read · LW link