This, in so many words, was likely one of the biggest factors in why Anthropic/OpenAI/DeepMind were so much more successful at AI safety than any LW person or group until maybe 2021 at the earliest, and even then the lead only began to shift.
A lot of AI safety proposals from before deep learning were basically useless because of this lack of feedback loops.
I think this is also connected to issues of ambition, but even so, the lack of feedback loops hurt LW far more than it realized.
It’s also why longtermism should be bounded in practice, and severely bounded at that.
Edit: The comment johnswentworth made points indirectly at a huge problem affecting LW, and at why I’m no longer inclined to take arguments for doom seriously: there are no feedback loops of any kind, with the exception of the AI companies.