
[Question] What about transhumans and beyond?

AlignmentMirror · 2 Jul 2022 13:58 UTC
2 points
3 comments · 1 min read · LW link

[Question] Examples of practical implications of Judea Pearl’s Causality work

ChristianKl · 1 Jul 2022 20:58 UTC
20 points
5 comments · 1 min read · LW link

[Question] How to deal with non-schedulable one-off stimulus-response-pair-like situations when planning/organising projects?

mikbp · 1 Jul 2022 15:22 UTC
2 points
1 comment · 1 min read · LW link

[Question] What is the contrast to counterfactual reasoning?

Dominic Roser · 1 Jul 2022 7:39 UTC
4 points
4 comments · 1 min read · LW link

[Question] AGI alignment with what?

AlignmentMirror · 1 Jul 2022 10:22 UTC
6 points
7 comments · 1 min read · LW link

[Question] Do you consider your current, non-superhuman self aligned with “humanity” already?

Rana Dexsin · 25 Jun 2022 4:15 UTC
10 points
19 comments · 1 min read · LW link

[Question] Cryonics-adjacent question

Flaglandbase · 30 Jun 2022 23:03 UTC
2 points
1 comment · 1 min read · LW link

[Question] How to Navigate Evaluating Politicized Research?

Davis_Kingsley · 1 Jul 2022 5:59 UTC
11 points
1 comment · 1 min read · LW link

[Question] What’s the goal in life?

Konstantin Weitz · 18 Jun 2022 6:09 UTC
4 points
6 comments · 1 min read · LW link

[Question] How would public media outlets need to be governed to cover all political views?

ChristianKl · 12 May 2022 12:55 UTC
13 points
14 comments · 1 min read · LW link

[Question] How should I talk about optimal but not subgame-optimal play?

JamesFaville · 30 Jun 2022 13:58 UTC
5 points
1 comment · 3 min read · LW link

[Question] Are long-form dating profiles productive?

AABoyles · 27 Jun 2022 17:03 UTC
33 points
29 comments · 1 min read · LW link

[Question] What is the LessWrong Logo(?) Supposed to Represent?

DragonGod · 28 Jun 2022 20:20 UTC
8 points
6 comments · 1 min read · LW link

[Question] Correcting human error vs doing exactly what you’re told—is there literature on this in context of general system design?

Jan Czechowski · 29 Jun 2022 21:30 UTC
6 points
0 comments · 1 min read · LW link

[Question] Should any human enslave an AGI system?

AlignmentMirror · 25 Jun 2022 19:35 UTC
−15 points
44 comments · 1 min read · LW link

[Question] What is the typical course of COVID-19? What are the variants?

Elizabeth · 9 Mar 2020 17:52 UTC
36 points
29 comments · 1 min read · LW link

[Question] Do alignment concerns extend to powerful non-AI agents?

Ozyrus · 24 Jun 2022 18:26 UTC
21 points
13 comments · 1 min read · LW link

[Question] Is there any way someone could post about public policy relating to abortion access (or another sensitive subject) on LessWrong without getting super downvoted?

Evan_Gaensbauer · 28 Jun 2022 5:45 UTC
18 points
20 comments · 1 min read · LW link

[Question] Literature on How to Maximize Preferences

josh · 28 Jun 2022 22:41 UTC
1 point
0 comments · 1 min read · LW link

[Question] Why Are Posts in the Sequences Tagged [Personal Blog] Instead of [Frontpage]?

DragonGod · 27 Jun 2022 9:35 UTC
4 points
2 comments · 1 min read · LW link