How about we learn to smile while saving the world? Saving the world doesn’t strike me as strictly incompatible with having fun, so let’s do both? :)
The post proposes that LessWrong could tackle alignment in more skillful ways, which is a wholesome thought, but I feel the post also casts doubt on the project of alignment itself; I want to push back on that.
It won’t become less important to prevent the creation of harmful technologies in 2028, or in any year for that matter. Timelines and predictions don’t feel super relevant here.
We know that AGI can be dangerous if created without proper understanding and that fact does not change with time or timelines, so LW should still aim for:
1. An international framework that restricts AGI creation and ensures safety, just as for other high-impact technologies
2. Alignment research, to eventually reap the benefits of aligned AGI, but with less pressure as long as point 1 stands
If the current way of advancing toward the goal is suboptimal, giving up on the goal is not the only answer; we can also change how we go about it. Since getting AGI right is important, not giving up and changing our approach seems like the better option (all this predicated on the post's depictions of snobbishness and doomishness being accurate).
My best guess is that the degree of doom is exaggerated but not fabricated. The exaggeration matters: if it's there, it warps perceptions of what to do about the real problem, so it would be ideal to address its cause, even though from the inside that will probably always feel like the wrong thing to focus on.
In the end, I think a healthy attitude looks more like facing the darkness hand-in-hand with joy in our hearts and music in our throats.