Do you know a person who believes that ASI will be created in <50 years who ISN’T in the LW/rationalist circle?
My parents don’t believe that a superintelligent AI will be created within this century, or ever for that matter, or that AI will ever take jobs. My relatives laugh at the idea of AI solving a high school math problem and think state-of-the-art AI is on the level of GPT-2 (I mean that the capabilities they have in mind are on the level of GPT-2, not that they know what GPT-2 is). My friend, an organic chemist, thinks that while AI can help with some narrow tasks, a truly general AI that could substitute for all human researchers is sci-fi. I know 4 people who use Codex/Claude Code; 2 of them call ASI sci-fi bullshit (btw, one of them said that the “Alignment faking in large language models” paper is nonsense after reading only the summary), 1 has never said anything about ASI, and 1 tentatively acknowledges that maybe ASI is possible to create in theory.
I have never, in my whole life, met a real walking, talking, breathing human being who believes that ASI will be created within this century.
EDIT: obviously there are people on the internet who believe that ASI will be created soon. My point wasn’t to deny their existence, just to share my experience, which makes me think “Am I living in an AI-is-a-nothingburger bubble? Am I crazy, or is everyone else (whom I personally know) around me crazy?”. I’m wondering if “everyone I personally know thinks AI is a nothingburger, and people who don’t are only found in very specific places on the Internet” is a common experience.
EDIT 2: I asked my organic chemist friend to be more specific and he said that AI will be able to replace 80% of human researchers in 100 years. When asked “What about 100%?”, he said that that will never happen and at least some humans will always be necessary and that the 80% replacement figure will be due to AI automating routine tasks. Basically, when it comes to AI he’s envisioning something more like the Industrial Revolution rather than “humanity’s last invention”.
The “ASI-pilled” part of society is mostly a subset of (1) people who work with computers, (2) people who read or watch science fiction, and (3) people who concern themselves with the big picture. LW rationalism is just a sub-subset of that.
Consider Musk, Altman, Amodei, Hassabis. They have all said it’s coming. Are they part of the rationalist circle? Not really. They know about us, they may agree in some areas, but they’ll disagree in others and their personal philosophical and social networks are not centered here. The same would apply to most of their employees, to various intellectuals and public figures who have said it’s coming, all the way down to the scattered private individuals who picked up the idea from who knows where.
Search X and Reddit for conversations about ASI, and you should find people talking about it who have no connection to this place (or even have a negative view of LW’s doomer take on ASI).
I consider TESCREAL to be pointing at a real social cluster, but the Gebru-Bender-Torres cluster’s reporting of it is so off-base that they borderline don’t deserve any engagement.
Most people in the PauseAI movement are not in the LW/rationalist circle. Some joined as rationalists or EAs, especially early on, but today most are normies (or were when they joined, anyway).
I personally found LessWrong and the forecasting community through AI Safety, not the reverse. I now organize for PauseAI Phoenix, a local group of PauseAI US. I have face-to-face conversations on a regular basis with people who believe ASI will be created within 2-20 years unless we prevent that from happening.
My experience is extremely different from yours. I think almost all the non-[rat/EA] people in my life whose positions on this I know consider it plausible that an AI substantially smarter than any human will be created this century (this includes e.g. the 4 family members I’ve discussed this topic with). Thinking of the set of non-[rat/EA] friends/[close-ish acquaintances] I haven’t discussed this topic with yet, my guess is that more than half of them already think this, and almost all of them would think this after a 2-hour conversation with me. It’s probably important that my distribution skews very high-IQ (maybe importantly both quant and verbal) and high-openness.
Like, these are mostly people I know from the international olympiad circuit, math and physics majors from my MIT undergrad, and classmates from the best high school in Estonia.
Some of them deferring to me partly on the question is probably also doing some work tbh, but I think this isn’t a big enough effect to change the broad strokes conditional on getting them to consider the hypothesis at all.
I mean, do you count people who got convinced by people in the LW/rationalist circle?
If so, you would have many examples. I don’t know the timelines of Brad Sherman, Neil deGrasse Tyson, Bernie Sanders, and similar “outsiders” who have been waving IABIED around, but surely some of them think it’s plausible in less than 50 years.
I don’t know how deeply “in the circle” I am. I suspect that many of my coworkers are even less so than I am (but I haven’t really asked). There’s wide agreement in that group that AGI is coming relatively soon. There’s no agreement on ASI, either on definition or timeline or impact. The most common belief is that some aspects will surpass human capabilities, but it’s uncertain when (or if) the infrastructure for continuous learning/adaptation and long-term integrated preferences will appear.
To zoom out a bit, from the post I assume you benchmark ASI mostly by “replacing humans 100% in all jobs”. Curious why you specifically care about an absolute 100%? (Replacing 50% of humans would still be significant imo.)
My wife, but that’s kind of cheating: even though she’s not in the circle directly, she gets a lot of her info on this subject (and advice on how to use AI) from me.
My friend who is an organic chemist laughs at the idea of AI doing any R&D.
That seems very strange, given the extremely high profile of things like AlphaFold. There’s no way he hasn’t heard of it, so what did he say about it when talking with you about AI?
He thinks it’s a cool narrow tool, but not an indication that it’s possible to create one AI that surpasses all humans at everything, including asking questions that humans have never asked before. I guess I misrepresented his opinion somewhat (I just edited my quick take). He thinks AI can help with some narrow tasks, but a human touch will always be necessary for other things, especially for open-ended research. Btw, he’s not concerned about losing his job.