I suggest you don’t include such unrelated politics in your posts at all. They actively detract from the main issues under discussion, and prime people for tribalist attitudes. Make a separate post about racism if you want, but don’t use it as an offhand example for a post on education.
SlateStarCodex deleted because NYT wants to dox Scott
Creating better infrastructure for controversial discourse
[Question] How do you study a math textbook?
What about greaterwrong.com?
Scott can be wrong, though. In fact, if his blog does get shut down, that is a major update against his conciliatory worldview. That post is also years old; it might not even reflect his current position.
Another complication is that in the current climate of victim-worship, he has incentives not to act belligerently himself (or to ask others to do so). Other people retaliating on Scott's behalf, without his request, will be much better for his reputation. (What I'm trying to say is that he has incentives to underplay any desire for revenge. Obviously I am not in his head, so all this is mere speculation.)
[Question] Why hasn’t there been research on the effectiveness of zinc for Covid-19?
There is a power imbalance in place. It's not as if the NYT is engaging this side in its decision. It's also true that the NYT's norms are self-serving while hurting others, and this community does not have anywhere near the power to "cancel" the NYT. Even if we assume "mistake theory," making the NYT hurt a bit (which is the strongest response this community can hope for) is necessary for creating a feedback loop. Mistakes are seldom corrected when their price is paid by others.
This is a complex claim not backed by a lot of evidence. My heuristics scream pseudoscience.
(First read my comment on the sister comment: https://www.lesswrong.com/posts/hKNJSiyzB5jDKFytn/open-and-welcome-thread-may-2021?commentId=iLrAts3ghiBc37X3j )
I looked at the 80k page again, and I still don't get their model. They say the bottleneck is people who have PhDs from top schools (an essentially supply-gated resource) and can geographically work in the FAI labs (a constant-ish fraction of said PhD holders). It seems to me that the main lever for increasing the number of top-school PhD graduates is to increase funding, and thus positions, in AI-related fields. (Of course, this lever might still take years to show its effects, but I do not see how individual decisions can be the bottleneck here.)
As I said, I am probably wrong, but I would like to understand this.
I am just speaking from general models and I have no specific model for FAI, so I was/am probably wrong.
I still don't understand the bottleneck. "There aren't promising projects to fund" — isn't this just another way of saying that the problem is hard, that most research attempts will be futile, and thus that to accelerate progress, unpromising projects need to be funded? I.e., what is the bottleneck, if it's not funding? "Brilliant ideas" are not under our direct control, so they cannot be part of our operating bottleneck.
[Question] What video games are more famous in our community than in the general public?
This is absolutely false. Here in Iran, selling kidneys is legal. Only desperate people sell. No one sells a kidney for something trivial like education.
You should create a github.com repo à la the Awesome lists (e.g., https://github.com/hellerve/programming-talks). LessWrong does not lend itself well to these collaborative community resources, as evidenced by the death of The Best Textbooks on Every Subject.
Reading AI alignment posts on here has made me realize that a lot of these ideas can potentially also apply to societal structures. Our social institutions are somewhat like an AI system that uses humans for its computing units. Unfortunately, our institutions are not that "friendly." In fact, badly aligned institutions are probably a major cause of the lack of progress in the developing world. Has there been much thought or discussion on these topics? Is there potential for adapting AI safety research to social mechanism design?
One relevant point is that remote work might be a "disruptive" technology: cheaper and more suitable for certain niches, but not as good as the traditional alternative. As time passes and the technology matures, it might claim ever more niches, until in the end it surpasses, or becomes an essential complement to, the traditional way of working.