
Util

Karma: 6

Calibrating indifference—a small AI safety idea

Util · 9 Sep 2025 9:32 UTC
4 points
1 comment · 4 min read · LW link

[Question] Clarifying how misalignment can arise from scaling LLMs

Util · 19 Aug 2023 14:16 UTC
3 points
1 comment · 1 min read · LW link

[Question] Is it correct to frame alignment as “programming a good philosophy of meaning”?

Util · 7 Apr 2023 23:16 UTC
2 points
3 comments · 1 min read · LW link