Karma: 66

How large of an army could you make with the first ‘human-level’ AGIs?

Josh · 23 Jun 2022 23:57 UTC
23 points
6 comments · 7 min read · LW link

Crystalizing an agent’s objective: how inner-misalignment could work in our favor

Josh · 16 Jun 2022 3:30 UTC
10 points
9 comments · 4 min read · LW link

Optimization power as divergence from default trajectories

Josh · 15 Jun 2022 21:50 UTC
9 points
2 comments · 5 min read · LW link