I haven't been following LessWrong for as long as many others here, but I've known about it for a while and was interested in AI before GPT-3. I have visions of an AI future that I don't really see elsewhere.
I make interactive games (and fiction) on http://icely.itch.io/, all trying to do something innovative and new. (My newest game is an interactive fiction + puzzle game presented in a Discord-like interface.)
Beyond the classic rhetorical moves that come with 'possibilities', hypotheticals, and very, very high stakes, I definitely don't resonate with the examples here.
The human world is deeply broken by coordination problems and can't even fix many sources of human suffering, so the path to reducing (non-human) animal suffering might actually run through fixing human suffering first; "therefore we should all be working on animal suffering" doesn't follow.
The simulation hypothesis assumes that consciousness can be created by simulation, which may not be true at all; the number of books in the world doesn't create any suffering for fictional characters.
I'm not sure how related you'll find this to the article, but it just didn't land for me in any way I can see: the examples seem flawed because of their premises, not because of the rhetoric.