Do you know a person who believes that ASI will be created in <50 years who ISN’T in the LW/rationalists circle?
My parents don’t believe that a superintelligent AI will be created within this century, or ever for that matter, or that AI will ever take jobs. My relatives laugh at the idea of AI solving a high school math problem and think state-of-the-art AI is on the level of GPT-2 (I mean that the capabilities they have in mind are on the level of GPT-2, not that they know what GPT-2 is). My friend who is an organic chemist laughs at the idea of AI doing any R&D and thinks that while AI can help with some narrow tasks, a truly general AI that could substitute for all human researchers is sci-fi. I know 4 people who use Codex/Claude Code; 2 of them call ASI sci-fi bullshit (btw, one of them said that the “Alignment faking in large language models” paper is nonsense after reading only the summary), 1 has never said anything about ASI, and 1 tentatively acknowledges that maybe ASI is possible to create in theory.
I have never, in my whole life, met a real walking, talking, breathing human being who believes that ASI will be created within this century.
EDIT: obviously there are people on the internet who believe that ASI will be created soon. My point wasn’t to deny their existence, just to share my experience, which makes me think “Am I living in an AI-is-a-nothingburger bubble? Am I crazy, or is everyone else (whom I personally know) around me crazy?”. I’m wondering if “Everyone I personally know thinks AI is a nothingburger, and people who don’t are only found in very specific places on the Internet” is a common experience.
EDIT 2: I asked my organic chemist friend to be more specific, and he said that AI will be able to replace 80% of human researchers in 100 years. When asked “What about 100%?”, he said that will never happen, that at least some humans will always be necessary, and that the 80% replacement figure will come from AI automating routine tasks. Basically, when it comes to AI he’s envisioning something more like the Industrial Revolution than “humanity’s last invention”.
He thinks it’s a cool narrow tool, but not an indication that it’s possible to create one AI that surpasses all humans at everything, including asking questions that humans have never asked before. I guess I misrepresented his opinion somewhat (I just edited my quick take). He thinks AI can help with some narrow tasks, but that the human touch will always be necessary for other things, especially open-ended research. Btw, he’s not concerned about losing his job.