I don’t think LessWrong is the right venue for this, or really, I don’t want it to become that venue. I basically agree with Steven Byrnes here, but generalized to also include rallying others to agree to stuff. I think it’s reasonable to make a post arguing people should publicly declare their stances on such things.
I’m not sure what we disagree about here? This current post is a place where people can publicly declare their stances. Ideally it would have a poll or aggregation feature in the main post, because I’m not even asking people to put their names on their declaration; I’m happy to just have the community infrastructure aggregate a high fraction of people’s existing views into mutual knowledge.
I’m not clear whether you think your argument generalizes to claiming the CAIS statement was skirting too close to groupthink too?
What I liked about the CAIS statement was that it enabled people to know that top academics and industry leaders were significantly agreeing on an extremely important belief. Without that kind of mutual-knowledge-building statement, the natural course of events is that it’s actually quite hard to tell what people think about the most basic claims. Even today, I find myself constantly pointing to the CAIS letter when people on the internet claim that AI x-risk is a totally unserious belief.
I think the situation we find ourselves in on LW is that the community hasn’t been polled or aggregated in any meaningful way, on a belief that seems to me about as much of a consensus for us as the CAIS statement was for experts in AI.
The strongest part of the argument to me is that “LW isn’t the place for it”. Fair enough, LW can be whatever people want… I just think it’s sad that no existing place was ever formed for this important missing piece of community mutual-knowledge building, except the place founded by the very guy, and around the very idea, that currently could have used the support.
> Now that political game may be necessary and useful, but I’d rather LessWrong not be a battlefield for such games, and remain the place everyone can go to to think, not fight.
Aggregating people’s existing belief-states into mutual knowledge seems like good “thinking” to me—part of the infrastructure that makes thinking more effective at climbing the tree of knowledge.
> I’m not clear whether you think your argument generalizes to claiming the CAIS statement was skirting too close to groupthink too?
What did the CAIS letter do?
> What I liked about the CAIS statement was that it enabled people to know that top academics and industry leaders were significantly agreeing on an extremely important belief. Without that kind of mutual-knowledge-building statement, the natural course of events is that it’s actually quite hard to tell what people think about the most basic claims. Even today, I find myself constantly pointing to the CAIS letter when people on the internet claim that AI x-risk is a totally unserious belief.
Restating in my own words: it got a bunch of well-respected, legibly smart people in AI and other areas to sign a letter saying they think AI risk is important. This is clearly very useful for people who don’t know anything about AI, since they must defer to some community to tell them about AI, and the CAIS letter was targeting the community that society delegates with that task. It worked because the general public cared what the people who signed the letter had to say, and so people changed their minds.
This statement of support is (as you say) to be used similarly to the CAIS letter. That is, to convince people who care what LessWrongers have to say that AI risk is a big deal. But the only people who care what LessWrongers have to say are other LessWrongers!
This is where the groupthink comes in, and this is what makes your statement of support both much more useless and much more harmful than the CAIS letter. Suppose your plan works: there are LessWrongers who get convinced by the number of LessWrongers who have signed your statement of support. They will then sign the statement, which will make the statement more convincing, and more people will sign, and so on. That is the exact dynamic which causes groupthink to occur.
I’m not saying this is likely, but 1) there are degrees of groupthink, and even if you don’t have a runaway explosion of it, you can cause more or less of it, and this definitely seems like it’d cause more, and 2) this is your plan; this is how the CAIS letter worked. If this sequence of events is unlikely, then your plan is unlikely to even work.
> Aggregating people’s existing belief-states into mutual knowledge seems like good “thinking” to me—part of the infrastructure that makes thinking more effective at climbing the tree of knowledge.
Not like this; this is not how unbiased, truth-seeking surveys look!
> The only people who care what LessWrongers have to say are other LessWrongers!
I disagree with that premise. The goal of LessWrong, as I understand it, is to lead the world on having correct opinions about important topics. I would never assume away the possibility of that goal.
Well then you’re wrong on both counts, as well as in the reasoning you’re using to derive the second count from the first.
First, LessWrong is not about leading the world. Note that there is a literal about page with a “What LessWrong is About” section, which of course says that LessWrong is about “apply[ing] the rationality lessons we’ve accumulated to any topic that interests us”, with not a lick of mention of leadership, or of communities other than LessWrong, except to say “Right now, AI seems like one of the most (or the most) important topics for humanity”. That does not sound like LessWrong is trying for a leadership role on the subject (though individual LessWrongers may be).
Second, even if it were true that LessWrong’s goal is to have others care what LessWrong thinks, we have no guarantee LessWrong is succeeding at that goal, so your inference is simply invalid and frankly absurd.
Edit: Perhaps this is where we disagree too, about the purpose of LessWrong; somewhere along the line you got the mistaken impression that LessWrong would like to try leading.
I’m happy to agree on the crux that if one accepts “the only people who care what LessWrongers have to say are other LessWrongers” (which I currently don’t), then that would weaken the case for mutual knowledge — I would say by about half. The other half of my claim is that building mutual knowledge benefits other LessWrongers.
I have argued that your argument for why “The goal of LessWrong [...] is to lead the world on having correct opinions about important topics” is false, that even if it were true this would not therefore imply people outside LessWrong care about the views of LessWrong, and that your particular strategy to build “mutual knowledge” is misguided & harmful to LessWrongers. So far I have seen zero real engagement on these points from you.
Especially on the “building mutual knowledge benefits other LessWrongers” point (implicitly, mutual knowledge via your post here), you have just consistently restated your belief ‘mutual knowledge is good’ as if it were an argument. That is not how arguments work, nor is my argument even “mutual knowledge is bad”.
I see this comment as continued minimal engagement. You don’t have to change your mind! It would be fine to say “I don’t know how to respond to your arguments; they are good, but not convincing for reasons I don’t know”, but instead you post this conversation on X (previously Twitter)[1] as if it’s obvious why I’m wrong, calling it “crab bucketing”.
I just feel like your engagement has been disingenuous, even if polite when talking here on LessWrong.
[1] I don’t follow you on X (previously Twitter), nor do I use it, but I did notice a change in voting patterns, so I figured someone (probably you) shared the conversation.
What’s the issue with my Twitter post? It just says I see your comment as representative of many LWers, and the same thing I said in my previous reply, that aggregating people’s belief-states into mutual knowledge is actually part of “thinking” rather than “fighting”.
I find the criticism of my quality of engagement in this thread distasteful, as I’ve provided substantive object-level engagement with each of your comments so far. I could equally criticize you for bringing up multiple sub-points per post that leave me no way to respond in a time-efficient way without being called “minimal”, but I won’t, because I don’t see either of our behaviors so far as breaking out of the boundaries of productive LessWrong discourse. My claim about this community’s “crab-bucketing” was a separate tweet not intended as a reply to you.
> I have argued that your argument for why “The goal of LessWrong [...] is to lead the world on having correct opinions about important topics” is false
Ok, I’ll pick this sub-argument to expand on. You correctly point out that what I wrote does not text-match the “What LessWrong is About” section. My argument would be that this cited quote:
> [Aspiring] rationalists should win [at life, their goals, etc]. You know a rationalist because they’re sitting atop a pile of utility. – Rationality is systematized winning
as well as Eliezer’s post “Something to Protect”, implies that a community that practices rationality ought to somehow optimize the causal connection between its practice of rationality and the impact that it has.
This obviously leaves room for people to have disagreeing interpretations of what LessWrong ought to do, as you and I currently do.
I think my responses have all had at most two distinct arguments, so I’m not sure in what sense I’m “bringing up multiple sub-points per post that leave [you] no way to respond in a time-efficient way without being called ’minimal’”. In the case that I am, that is also what the ‘not worth getting into’ emoji is for.
(other than this one, which has three)
> This obviously leaves room for people to have disagreeing interpretations of what LessWrong ought to do, as you and I currently do.
Room for disagreement does not imply any disagreement is valid.
> What’s the issue with my Twitter post? It just says I see your comment as representative of many LWers, and the same thing I said in my previous reply, that aggregating people’s belief-states into mutual knowledge is actually part of “thinking” rather than “fighting”.
It says:
> LW crab-buckets anyone from elevating an already majority-held object-level belief into mutual knowledge.
But this is not what I’m doing, nor what I’ve argued for anywhere, and as I’ve said before,
> nor is my argument even “mutual knowledge is bad”.
For example, I really like the LessWrong surveys! I take those every year!
I just think your post is harmful and uninformative.
> nor is my argument even “mutual knowledge is bad”.
> For example, I really like the LessWrong surveys! I take those every year!
What’s the minimally modified version of posting this “Statement of Support for IABIED” you’d feel good about? Presumably the upper bound for your desired level of modification would be if we included a yearly survey question about whether people agree with the quoted central claim from the book?
I would feel better if your post were more of the following form: “I am curious about the actual level of agreement with IABI; here is a list of directly quoted claims made in the book (which I put in the comments below). React with agree/disagree about each claim. If your answer is instead ‘mu’, then you can argue about it or something idk. Please also feel free to comment claims the book makes which you are interested in getting a survey of LessWrong opinions on yourself.”
I think I would actually strong-upvote this post if it existed, provided the moderation seemed appropriate, and the seeded claims concrete and not phrased leadingly.
Edit: Bonus points for encouraging people to use the probability reacts rather than agree/disagree reacts.
Thanks. The reactions to such a post would constitute a stronger common knowledge signal of community agreement with the book (to the degree that such agreement is in fact present in the community).
I wonder if it would be better to make the agree-voting anonymous (like LW post voting) or with people’s names attached to their votes (like react-voting).
I’m sure this is going too far for you, but I also personally wish LW could go even further toward turning a sufficient amount of mutual support expressed in that form (if it turns out to exist) into a frontpage that actually looks like what most humans expect a supportive front page around a big event to look like (more so than just having a banner and discussion mentioning it).
Again, the separate tweet about LW crab-bucketing in my Twitter thread wasn’t meant as a response to you in this LW thread.
I agree that “room for disagreement does not imply any disagreement is valid”, and am not seeing anything left to respond to on that point.