Fair enough. My judgement is obviously different, in that I want other people to freak out (well, I don’t actually want them to be anxious and fearful, but I can’t control that), in the sense that I want people to realize what I think is happening and, if they agree, take short-term actions that may buy us more time to do critical safety work. For example, now is a great time in my view to coordinate on a capabilities pause (though yes, I know there are many concerns with pauses, and that’s a separate debate).
Already said this, but want to repeat that I think the perspective is totally valid.
But in terms of the cold consequentialist calculus, I don’t know how you get to the “more alarming is a good idea” result. Maybe I’m biased because 2⁄2 cases I know well (myself and the friend I mentioned) low-key left the platform because the constant reminders are so crippling for mental health. I don’t have a survey on how bad other people feel. But my impression is that I see a post about AI acceleration more than half the time I look at the frontpage. Valentine wrote Here’s the Exit literally over three years ago! It was already so bad back then that people contemplated leaving the community over it. And it’s been going on ever since.
I genuinely believe that even if your utility function has zero terms in it other than maximizing useful AI interventions (whether policy or technical safety work, or anything else), you should want fewer posts like this. Everyone got the memo that it’s time to panic. I think the awareness-of-how-bad-it-is curve would have plateaued even if there were one fifth as many posts like this one, and the marginal effect of every other post is just to make people freak out more.
I appreciate your replies, Gordon, and your saying (or implying) that this post does indeed boil down to vibes (“feeling the AGI hard”) and not some unassailable pronouncement. Given that, I do wonder, like Rafael, if the bombastic title “AGI is Here” is overstated and will lead many to undue anxiety. (Or due anxiety, but not actionable anxiety.)
I understand that since you believe full-strength AGI is near at hand, you believe it is meaningful and useful to overstate the present state of things a bit. So I wonder: what are people to do about this? Of course the whole society is grasping for the answer to this, and we cannot know it. Since you have been in this space for a long time (and I am not in it at all) I’d be curious to hear your thoughts in a later post.
You say your audience is other techies who are deep in this stuff, in agentic AI. I guess your message to them is something like, “Make sure you skill up so you are not left behind like all novice programmers certainly will be.”
What would you advise non-techies (including former techies, like me) who are cautiously wowed by LLMs but are not especially worried about AGI to do with the alarm that you believe you rightly sound? I do not study AI safety or pay attention to the ongoing speculations about AGI, whatever that concept means. I do not frequent this site. I only made an account and commented here because a good friend who does frequent the site (and, like me, does not use agentic tools) sent me this post with an implied oh shit.
If non-specialists (my friend and I) are not the audience you want to engage, my apologies, and feel free to ignore my post. And I am sorry if my original post was not too productive. It’s just that this week there was a lot of fear pervading the waters of the web. I’m trying to understand why, and I had hoped, given the title of this post, to find something more than that someone deep in this stuff is experiencing anxieties that “most people [in his audience] can simply see for themselves.” In that case, I wonder why such a post is needed, other than for the audience to feel less alone: “Hey, I guess I’m not the only one with this deep worry.”
Promised follow-up post is live: https://www.lesswrong.com/posts/bj6ffpD6Jzid6vFa8/what-to-do-about-agi
I will try to write a follow-up post with my suggestions.