[Question] Examples of practical implications of Judea Pearl’s Causality work

ChristianKl · 1 Jul 2022 20:58 UTC
20 points
5 comments · 1 min read · LW link

[Question] What about transhumans and beyond?

AlignmentMirror · 2 Jul 2022 13:58 UTC
2 points
0 comments · 1 min read · LW link

[Question] How to Navigate Evaluating Politicized Research?

Davis_Kingsley · 1 Jul 2022 5:59 UTC
11 points
1 comment · 1 min read · LW link

[Question] Are long-form dating profiles productive?

AABoyles · 27 Jun 2022 17:03 UTC
33 points
29 comments · 1 min read · LW link

[Question] What’s the contingency plan if we get AGI tomorrow?

Yitz · 23 Jun 2022 3:10 UTC
61 points
24 comments · 1 min read · LW link

[Question] AGI alignment with what?

AlignmentMirror · 1 Jul 2022 10:22 UTC
6 points
7 comments · 1 min read · LW link

[Question] Is there any way someone could post about public policy relating to abortion access (or another sensitive subject) on LessWrong without getting super downvoted?

Evan_Gaensbauer · 28 Jun 2022 5:45 UTC
18 points
20 comments · 1 min read · LW link

[Question] What is the contrast to counterfactual reasoning?

Dominic Roser · 1 Jul 2022 7:39 UTC
4 points
4 comments · 1 min read · LW link

[Question] why assume AGIs will optimize for fixed goals?

nostalgebraist · 10 Jun 2022 1:28 UTC
91 points
52 comments · 4 min read · LW link

[Question] How should I talk about optimal but not subgame-optimal play?

JamesFaville · 30 Jun 2022 13:58 UTC
5 points
1 comment · 3 min read · LW link

[Question] Do alignment concerns extend to powerful non-AI agents?

Ozyrus · 24 Jun 2022 18:26 UTC
21 points
13 comments · 1 min read · LW link

[Question] Is CIRL a promising agenda?

Chris_Leong · 23 Jun 2022 17:12 UTC
24 points
12 comments · 1 min read · LW link

[Question] Correcting human error vs doing exactly what you’re told—is there literature on this in context of general system design?

Jan Czechowski · 29 Jun 2022 21:30 UTC
6 points
0 comments · 1 min read · LW link

[Question] How to deal with non-schedulable one-off stimulus-response-pair-like situations when planning/organising projects?

mikbp · 1 Jul 2022 15:22 UTC
2 points
1 comment · 1 min read · LW link

[Question] What is the LessWrong Logo(?) Supposed to Represent?

DragonGod · 28 Jun 2022 20:20 UTC
8 points
6 comments · 1 min read · LW link

[Question] What is Going On With CFAR?

niplav · 28 May 2022 15:21 UTC
93 points
35 comments · 1 min read · LW link

[Question] Why don’t we think we’re in the simplest universe with intelligent life?

ADifferentAnonymous · 18 Jun 2022 3:05 UTC
29 points
32 comments · 1 min read · LW link

[Question] Has anyone actually tried to convince Terry Tao or other top mathematicians to work on alignment?

P. · 8 Jun 2022 22:26 UTC
49 points
46 comments · 4 min read · LW link

[Question] How do I use caffeine optimally?

randomstring · 22 Jun 2022 17:59 UTC
18 points
31 comments · 1 min read · LW link

[Question] What’s the “This AI is of moral concern.” fire alarm?

Quintin Pope · 13 Jun 2022 8:05 UTC
36 points
58 comments · 2 min read · LW link