> Our community seems to love treating people like mass-produced automatons with a fixed and easily assessable “ability” attribute.

Have you considered the implied contradiction between the “culturally broken community” you describe and the beliefs you espouse below, which derive from that same community?
> I was crying the other night because our light cone is about to get ripped to shreds. I’m gonna do everything I can to do battle against the forces that threaten to destroy us.
Your doom beliefs derive from this “culturally broken community”; you probably did not derive them from first principles yourself. Is it broken in just the right ways to reach the correct conclusion that the world is doomed, even when experts knowledgeable in the relevant subject matters that would actually lead to doom find this laughable?
Consider a counterfactual version of yourself that never read HPMOR, the sequences, or LW during the critical periods of your upper cortical plasticity window, and instead spent all that time reading/studying deep learning, systems neuroscience, etc. Do you think that alternate version of yourself, after reviewing the arguments that led you to the conclusion that the “light cone is about to get ripped to shreds,” would reach the same conclusion? Do you think you would win a debate against that version of yourself? Have you gone out of your way to read the best arguments of those who do not believe “our light cone is about to get ripped to shreds”? The sequences are built on a series of implicit assumptions about reality, some of which are likely incorrect based on recent evidence.

This seems overstated; plenty of AI/ML experts are concerned. [1] [2] [3] [4] [5] [6] [7] [8] [9]
Quoting from [1], a survey of researchers who published at top ML conferences:
> The median respondent’s probability of x-risk from humans failing to control AI was 10%
Admittedly, that’s a far cry from “the light cone is about to get ripped to shreds,” but it’s also pretty far from finding those concerns laughable. [Edited to add: another recent survey puts the median estimate of extremely bad, extinction-level outcomes at 2%; lower, but arguably still not laughable.]
To be clear, I am also concerned, but at lower probability levels and mostly not about doom. The laughable part is the specific claim that “our light cone is about to get ripped to shreds” by a paperclipper or the equivalent, on the strength of an overconfident and mostly incorrect EY/LW/MIRI argument involving supposed complexity of value, failure of alignment approaches, fast takeoff, sharp left turn, etc.
I of course agree with Aaro Salosensaari that many of the concerned experts were/are downstream of LW. But this also works the other way to some degree: beliefs about AI risk will influence career decisions, so it’s not surprising that most of those working on AI capability research think the risk is low while those working on AI safety/alignment think it is greater.
Hyperbole aside, how many of the experts linked above (and/or contributing to the 10% / 2% estimates) arrived at their conclusions via a thought process that is “downstream” of the thoughtspace the parent commenter thinks suspect? To the extent that they did, their views would not qualify as independent evidence or rebuttal, since they fall within the target of the criticism.
One specific concern people could have with this thoughtspace is that it’s hard to square with the knowledge that an AI PhD [edit: or rather, AI/ML expertise more broadly] provides. I took this point to be strongly implied by the author’s suggestions that “experts knowledgeable in the relevant subject matters that would actually lead to doom find this laughable” and that someone who spent their early years “reading/studying deep learning, systems neuroscience, etc.” would not find the risk arguments compelling. That is directly refuted by the surveys (though I agree that some other concerns about this thoughtspace aren’t).
(However, it looks like the author was making a different point to what I first understood.)

I really don’t want to entertain this “you’re in a cult” stuff.

It’s not very relevant to the post, and it’s not very intellectually engaging either. I’ve dedicated enough cycles to this stuff.
That’s not really what I’m saying: it’s more like this community naturally creates nearby phyg-like attractors which take some individually varying effort to avoid. If you don’t have any significant differences of opinion/viewpoint, you may already be in the danger zone. There are numerous historical case examples of individuals spiraling too far in, if you know where to look.

If you want to talk about cults, just say “cult”.
Hm? Seems a little pro-social to me to rot-13 things that you don’t want to unfairly show up high in Google search. (Though the first time you do it, it’s good to hyperlink to rot13.com so that everyone who reads can understand what you’re saying.)
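For anyone who hasn’t seen rot-13 before: it rotates each letter 13 places through the 26-letter alphabet, so the same operation both encodes and decodes. A minimal sketch in Python, using the standard library’s built-in codec:

```python
import codecs

def rot13(text: str) -> str:
    # Rotate each ASCII letter 13 places; non-letters pass through unchanged.
    # Because 13 + 13 = 26, applying rot13 twice returns the original text.
    return codecs.encode(text, "rot13")

print(rot13("cult"))  # -> phyg (the ingroup term discussed in this thread)
print(rot13("phyg"))  # -> cult (rot-13 is its own inverse)
```

This is obfuscation rather than encryption: anyone who recognizes the scheme can decode it instantly, which is fine here, since the only goal is keeping the plaintext out of search indexes.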
About a decade ago people were worried that LW and cults would be associated through search [1] and started using “phyg” instead. Having a secret ingroup word for “cult” to avoid being associated with it is actually much more culty than a few search results, and I wish we hadn’t done it.

[1] https://www.lesswrong.com/posts/hxGEKxaHZEKT4fpms/our-phyg-is-not-exclusive-enough?commentId=4mSRMZxmopEj6NyrQ
I disagree; I think the rest of the world has built a lot of superweapons around certain terms, to the point where you can’t really bring them up. I think it’s a pretty clever strategy to be like, “If you come here to search for drama, you cannot get to drama with search terms.” For instance, if someone wanted to talk about some dynamics of enpvfz or frkhny zvfpbaqhpg in a community, they might be able to have a far less charged and tense discussion if everyone collectively agreed to put down the weapon of “I might use a term while criticizing your behavior that means that whenever a random person wants to look for dirt on you, they can google this and use it to try to get you fired.”
Taking this line of reasoning one step further, it seems plausible to me that it’s pro-social to do this sometimes anyway, just to create plausible deniability about why you’re doing it, similar to why it’s good to use Signal for lots of communications.

I thought they were calling me a flying Minecraft pig https://aether.fandom.com/wiki/Phyg

Well, let me be the first to say that I don’t think you’re a passive mob that can be found in the Aether.