
So8res (Nate Soares)

Karma: 11,686

A rough and incomplete review of some of John Wentworth’s research

So8res · 28 Mar 2023 18:52 UTC
149 points
15 comments · 18 min read · LW link

A stylized dialogue on John Wentworth’s claims about markets and optimization

So8res · 25 Mar 2023 22:32 UTC
117 points
15 comments · 8 min read · LW link

Truth and Advantage: Response to a draft of “AI safety seems hard to measure”

So8res · 22 Mar 2023 3:36 UTC
87 points
9 comments · 5 min read · LW link

Deep Deceptiveness

So8res · 21 Mar 2023 2:51 UTC
195 points
51 comments · 14 min read · LW link

Comments on OpenAI’s “Planning for AGI and beyond”

So8res · 3 Mar 2023 23:01 UTC
145 points
2 comments · 14 min read · LW link

Enemies vs Malefactors

So8res · 28 Feb 2023 23:38 UTC
194 points
59 comments · 1 min read · LW link

AI alignment researchers don’t (seem to) stack

So8res · 21 Feb 2023 0:48 UTC
177 points
35 comments · 3 min read · LW link

Hashing out long-standing disagreements seems low-value to me

So8res · 16 Feb 2023 6:20 UTC
126 points
33 comments · 4 min read · LW link

Focus on the places where you feel shocked everyone’s dropping the ball

So8res · 2 Feb 2023 0:27 UTC
367 points
56 comments · 4 min read · LW link

What I mean by “alignment is in large part about making cognition aimable at all”

So8res · 30 Jan 2023 15:22 UTC
134 points
24 comments · 2 min read · LW link