
[Question] Is there a Schelling point for group house room listings?

NoSignalNoNoise · 23 Jul 2024 3:03 UTC
4 points
0 comments · 1 min read · LW link

[Question] Would a scope-insensitive AGI be less likely to incapacitate humanity?

Jim Buhler · 21 Jul 2024 14:15 UTC
2 points
3 comments · 1 min read · LW link

[Question] Have people given up on iterated distillation and amplification?

Chris_Leong · 19 Jul 2024 12:23 UTC
19 points
1 comment · 1 min read · LW link

[Question] What are the actual arguments in favor of computationalism as a theory of identity?

sunwillrise · 18 Jul 2024 18:44 UTC
12 points
24 comments · 5 min read · LW link

[Question] Me & My Clone

SimonBaars · 18 Jul 2024 16:25 UTC
25 points
19 comments · 1 min read · LW link

[Question] Should we exclude alignment research from LLM training datasets?

Ben Millwood · 18 Jul 2024 10:27 UTC
1 point
0 comments · 1 min read · LW link

[Question] Opinions on Eureka Labs

jmh · 17 Jul 2024 0:16 UTC
6 points
3 comments · 1 min read · LW link

[Question] Seeking feedback on a critique of the paperclip maximizer thought experiment

bio neural · 15 Jul 2024 18:39 UTC
3 points
9 comments · 1 min read · LW link

[Question] What Other Lines of Work are Safe from AI Automation?

RogerDearnaley · 11 Jul 2024 10:01 UTC
27 points
33 comments · 5 min read · LW link

[Question] Pondering how good or bad things will be in the AGI future

Sherrinford · 9 Jul 2024 22:46 UTC
11 points
9 comments · 2 min read · LW link

[Question] If AI starts to end the world, is suicide a good idea?

IlluminateReality · 9 Jul 2024 21:53 UTC
0 points
8 comments · 1 min read · LW link

[Question] How bad would AI progress need to be for us to think general technological progress is also bad?

Jim Buhler · 9 Jul 2024 10:43 UTC
9 points
5 comments · 1 min read · LW link

[Question] Can agents coordinate on randomness without outside sources?

Mikhail Samin · 6 Jul 2024 13:43 UTC
5 points
16 comments · 1 min read · LW link

[Question] What progress have we made on automated auditing?

LawrenceC · 6 Jul 2024 1:49 UTC
37 points
1 comment · 1 min read · LW link

[Question] Are there any plans to launch a paperback version of “Rationality: From AI to Zombies”?

m_arj · 5 Jul 2024 11:14 UTC
2 points
1 comment · 1 min read · LW link

[Question] What percent of the sun would a Dyson Sphere cover?

Raemon · 3 Jul 2024 17:27 UTC
24 points
26 comments · 1 min read · LW link

[Question] Isomorphisms don’t preserve subjective experience… right?

notfnofn · 3 Jul 2024 14:22 UTC
4 points
26 comments · 1 min read · LW link

[Question] Why Can’t Sub-AGI Solve AI Alignment? Or: Why Would Sub-AGI AI Not be Aligned?

MrThink · 2 Jul 2024 20:13 UTC
4 points
23 comments · 1 min read · LW link

[Question] Why haven’t there been assassination attempts against high-profile AI accelerationists like Sam Altman yet?

louisTrem · 2 Jul 2024 18:16 UTC
−13 points
4 comments · 2 min read · LW link

[Question] Self-censoring on AI x-risk discussions?

Decaeneus · 1 Jul 2024 18:24 UTC
17 points
2 comments · 1 min read · LW link