Lauro Langosco (Karma: 536)
https://www.laurolangosco.com/
Uncertainty about the future does not imply that AGI will go well
Lauro Langosco · 1 Jun 2023 17:38 UTC · 62 points · 11 comments · 7 min read · LW link
An Exercise to Build Intuitions on AGI Risk
Lauro Langosco · 7 Jun 2023 18:35 UTC · 52 points · 3 comments · 8 min read · LW link
Some reasons why a predictor wants to be a consequentialist
Lauro Langosco · 15 Apr 2022 15:02 UTC · 23 points · 16 comments · 5 min read · LW link
Alignment researchers, how useful is extra compute for you?
Lauro Langosco · 19 Feb 2022 15:35 UTC · 8 points · 4 comments · 1 min read · LW link
[Question] What alignment-related concepts should be better known in the broader ML community?
Lauro Langosco · 9 Dec 2021 20:44 UTC · 6 points · 4 comments · 1 min read · LW link