My judgement is obviously different, in that I want other people to freak out (well, I don’t actually want them to be anxious and fearful, but I can’t control that). I want people to realize what I think is happening and, if they agree, take short-term actions that may buy us more time to do critical safety work.
Already said this, but want to repeat that I think the perspective is totally valid.
But in terms of the cold consequentialist calculus, I don’t know how you get to the “more alarming is a good idea” result. Maybe I’m biased because 2/2 cases I know well (myself and the friend I mentioned) low-key left the platform because the constant reminders are so crippling for mental health. I don’t have a survey on how bad other people feel. But my impression is that I see a post about AI acceleration more than half the time I look at the frontpage. Valentine wrote Here’s the Exit literally over three years ago! It was already so bad back then that people contemplated leaving the community over it. And it’s been going on ever since.
I genuinely believe that even if your utility function has zero terms in it other than maximizing useful AI interventions (whether policy or technical safety work, or anything else), you should want fewer posts like this. Everyone got the memo that it’s time to panic. I think the awareness-of-how-bad-it-is curve would have plateaued even if there were one fifth as many posts like this one, and the marginal effect of every other post is just to make people freak out more.