Strongly agree that it correlates. It’s just not quite the same as most people agreeing with you. It probably measures something more like a lot of people strongly agreeing with you, with it mattering less how many people disagree with you. Like if on topic A agreement/disagreement is 50/50 and on topic B it’s 90/10, but people feel super strongly about A while B’s readers are mostly indifferent, then A might get rapid upvotes whereas B would just disappear from visibility immediately.
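Here’s that claim as toy arithmetic (a minimal sketch with made-up numbers; the 200-reader population, the vote probabilities, and the down-weighting of disagreement are all my assumptions, not LessWrong’s actual karma mechanics):

```python
# Toy model of the A-vs-B scenario above. Everything here is assumed for
# illustration; it is not how LessWrong actually computes karma.

def expected_votes(readers: int, agree_p: float, strong_p: float):
    """Only readers who feel strongly enough bother to vote at all."""
    voters = readers * strong_p
    return voters * agree_p, voters * (1 - agree_p)  # (upvotes, downvotes)

def visibility(up: float, down: float, down_weight: float = 0.3):
    """Hypothesis: disagreement drags visibility down less than strong
    agreement pushes it up."""
    return up - down_weight * down

for name, agree_p, strong_p in [("A: 50/50 split, strong feelings", 0.5, 0.90),
                                ("B: 90/10 split, indifferent", 0.9, 0.05)]:
    up, down = expected_votes(200, agree_p, strong_p)
    print(f"{name}: up={up:.0f}, down={down:.0f}, "
          f"visibility={visibility(up, down):.1f}")

# A: up=90, down=90, visibility=63.0  -> rapid votes, stays visible
# B: up=9,  down=1,  visibility=8.7   -> barely any votes, sinks
```

Under this (assumed) weighting, the evenly split but emotionally charged topic ends up far more visible than the near-consensus topic nobody cares enough about to vote on.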
Anyway, much more importantly:
That said, I wrote this post mostly because I’m freaking out about the state of things and this is my best attempt to crystallize the source of that freak-out. I’m not actually trying to make a rigorous argument about what counts as AGI, and I don’t even know if making a rigorous argument is worth it. You should probably read this post more as “Gordon is saying that he’s feeling the AGI hard, and you should do with the information about his judgement as a 25-year veteran of AGI discourse what you will.”
Big props for saying that! And this is actually one of the reasons why I perceive these posts as harmful: I think a lot of people are freaking out, this kind of post contributes to more people freaking out, and because I think AGI isn’t very close, that just seems like a big net negative to me. But of course that depends entirely on your beliefs; if AGI were close, maybe it would be correct to freak out. This is what always makes it difficult to know what to say about posts like this: to me the entire pre-apocalypse vibe is a huge negative for the overall utility of LessWrong, but I think people genuinely believe it, so it’s unclear to what extent I should or can push back.
FWIW, my ire is much more directed at people upvoting this than at you for writing it. This seems to me like a pretty clear case: even if the narrative were 100% true, I don’t see how continuously broadcasting the vibe is a good idea. If I were an alignment researcher, I doubt it’d be good for my productivity. In fact, I just talked to someone a few weeks ago on Discord who told me they don’t check LW much anymore for mental health reasons, even though they still completely believe in the narrative and are trying to work on it. (They even deleted their account, which I thought was a bad idea.)
Fair enough. My judgement is obviously different, in that I want other people to freak out (well, I don’t actually want them to be anxious and fearful, but I can’t control that): I want people to realize what I think is happening and, if they agree, take short-term actions that may buy us more time to do critical safety work. For example, now is a great time in my view to coordinate on a capabilities pause (though yes, I know there are many concerns with pauses, and that’s a separate debate).
Already said this, but want to repeat that I think the perspective is totally valid.
But in terms of the cold consequentialist calculus, I don’t know how you get to the “more alarm is a good idea” result. Maybe I’m biased because 2/2 cases I know well (myself and the friend I mentioned) low-key left the platform because the constant reminders are so crippling for mental health. I don’t have a survey on how bad other people feel. But my impression is that I see a post about AI acceleration more than half the time I look at the frontpage. Valentine wrote Here’s the Exit literally over three years ago! It was already so bad back then that people contemplated leaving the community over it. And it’s been going on ever since.
I genuinely believe that even if your utility function has zero terms in it other than maximizing useful AI interventions (whether policy or technical safety work, or anything else), you should want fewer posts like this. Everyone got the memo that it’s time to panic. I think the awareness-of-how-bad-it-is curve would have plateaued even if there were one fifth as many posts like this one, and the marginal effect of every other post is just to make people freak out more.
I appreciate your replies, Gordon, and your saying (or implying) that this post does indeed boil down to vibes (“feeling the AGI hard”) and not some unassailable pronouncement. Given that, I do wonder, like Rafael, if the bombastic title “AGI is Here” is overstated and will lead many to undue anxiety. (Or due anxiety, but not actionable anxiety.)
I understand that since you believe full-strength AGI is near at hand, you believe it is meaningful and useful to overstate the present state of things a bit. So I wonder: what are people to do about this? Of course the whole of society is grasping for the answer to this, and we cannot know it. Since you have been in this space for a long time (and I am not in it at all), I’d be curious to hear your thoughts in a later post.
You say your audience is other techies who are deep in this stuff, in agentic AI. I guess your message to them is something like, “Make sure you skill up so you are not left behind like all novice programmers certainly will be.”
What would you advise non-techies (including former techies, like me) who are cautiously wowed by LLMs but are not especially worried about AGI to do with the alarm that you believe you rightly sound? I do not study AI safety or pay attention to the ongoing speculations about AGI, whatever that concept means. I do not frequent this site. I only made an account and commented here because a good friend who does frequent the site (and, like me, does not use agentic tools) sent me this post with an implied oh shit.
If non-specialists (my friend and I) are not the audience you want to engage, my apologies, and feel free to ignore my post. And I am sorry if my original post was not too productive. It’s just that this week there was a lot of fear pervading the waters of the web. I’m trying to understand why, and I had hoped, given the title of this post, to find something more than that someone deep in this stuff is experiencing anxieties that “most people [in his audience] can simply see for themselves.” In that case, I wonder why such a post is needed, other than for the audience to feel less alone: “Hey, I guess I’m not the only one with this deep worry.”
Promised follow-up post is live: https://www.lesswrong.com/posts/bj6ffpD6Jzid6vFa8/what-to-do-about-agi
I will try to write a follow-up post with my suggestions.