[Question] What’s the Right Way to think about Information Theoretic quantities in Neural Networks?

Dalcy · 19 Jan 2025 8:04 UTC
40 points
4 comments · 3 min read · LW link

[Question] How likely is AGI to force us all to be happy forever? (much like in the Three Worlds Collide novel)

uhbif19 · 18 Jan 2025 15:39 UTC
9 points
5 comments · 1 min read · LW link

[Question] What’s Wrong With the Simulation Argument?

AynonymousPrsn123 · 18 Jan 2025 2:32 UTC
3 points
31 comments · 1 min read · LW link

[Question] What do you mean with ‘alignment is solvable in principle’?

Remmelt · 17 Jan 2025 15:03 UTC
2 points
8 comments · 1 min read · LW link

[Question] How Do You Interpret the Goal of LessWrong and Its Community?

ashen8461 · 16 Jan 2025 19:08 UTC
−2 points
2 comments · 1 min read · LW link

[Question] Where should one post to get into the training data?

keltan · 15 Jan 2025 0:41 UTC
11 points
4 comments · 1 min read · LW link

[Question] Why do futurists care about the culture war?

Knight Lee · 14 Jan 2025 7:35 UTC
14 points
18 comments · 2 min read · LW link

[Question] AI for medical care for hard-to-treat diseases?

CronoDAS · 10 Jan 2025 23:55 UTC
12 points
0 comments · 1 min read · LW link

[Question] What are some scenarios where an aligned AGI actually helps humanity, but many/most people don’t like it?

RomanS · 10 Jan 2025 18:13 UTC
11 points
6 comments · 3 min read · LW link

[Question] Is Musk still net-positive for humanity?

mikbp · 10 Jan 2025 9:34 UTC
−14 points
17 comments · 1 min read · LW link

[Question] How do you decide to phrase predictions you ask of others? (and how do you make your own?)

CstineSublime · 10 Jan 2025 2:44 UTC
7 points
0 comments · 2 min read · LW link

[Question] How can humanity survive a multipolar AGI scenario?

Leonard Holloway · 9 Jan 2025 20:17 UTC
13 points
8 comments · 2 min read · LW link

[Question] What is the most impressive game LLMs can play well?

Cole Wyeth · 8 Jan 2025 19:38 UTC
18 points
14 comments · 1 min read · LW link

[Question] Meal Replacements in 2025?

alkjash · 6 Jan 2025 15:37 UTC
22 points
9 comments · 1 min read · LW link

[Question] Is “hidden complexity of wishes problem” solved?

Roman Malov · 5 Jan 2025 22:59 UTC
10 points
4 comments · 1 min read · LW link

[Question] Can private companies test LVTs?

Yair Halberstadt · 2 Jan 2025 11:08 UTC
7 points
0 comments · 1 min read · LW link

[Question] 2025 Alignment Predictions

anaguma · 2 Jan 2025 5:37 UTC
3 points
3 comments · 1 min read · LW link

[Question] Could my work, “Beyond HaHa” benefit the LessWrong community?

P. João · 29 Dec 2024 16:14 UTC
9 points
6 comments · 1 min read · LW link

[Question] Has Someone Checked The Cold-Water-In-Left-Ear Thing?

Maloew · 28 Dec 2024 20:15 UTC
9 points
0 comments · 1 min read · LW link

[Question] What’s the best metric for measuring quality of life?

ChristianKl · 27 Dec 2024 14:29 UTC
10 points
5 comments · 1 min read · LW link