Lauro Langosco

Karma: 536

https://www.laurolangosco.com/

Long-Term Future Fund Ask Us Anything (September 2023)

31 Aug 2023 0:28 UTC
33 points
6 comments · 1 min read · LW link
(forum.effectivealtruism.org)

Lauro Langosco’s Shortform

Lauro Langosco · 16 Jun 2023 22:17 UTC
4 points
4 comments · 1 min read · LW link

An Exercise to Build Intuitions on AGI Risk

Lauro Langosco · 7 Jun 2023 18:35 UTC
52 points
3 comments · 8 min read · LW link

Uncertainty about the future does not imply that AGI will go well

Lauro Langosco · 1 Jun 2023 17:38 UTC
62 points
11 comments · 7 min read · LW link

Research Direction: Be the AGI you want to see in the world

5 Feb 2023 7:15 UTC
43 points
0 comments · 7 min read · LW link

Some reasons why a predictor wants to be a consequentialist

Lauro Langosco · 15 Apr 2022 15:02 UTC
23 points
16 comments · 5 min read · LW link

Alignment researchers, how useful is extra compute for you?

Lauro Langosco · 19 Feb 2022 15:35 UTC
8 points
4 comments · 1 min read · LW link

[Question] What alignment-related concepts should be better known in the broader ML community?

Lauro Langosco · 9 Dec 2021 20:44 UTC
6 points
4 comments · 1 min read · LW link

Discussion: Objective Robustness and Inner Alignment Terminology

23 Jun 2021 23:25 UTC
73 points
7 comments · 9 min read · LW link

Empirical Observations of Objective Robustness Failures

23 Jun 2021 23:23 UTC
63 points
5 comments · 9 min read · LW link