Since the livejournal link is defunct, here’s an archive link: https://web.archive.org/web/20180101160950/http://squid314.livejournal.com/324957.html
lesswrong.com/tagvoting isn’t loading for me. All I get is the error “Error: TypeError: Cannot read property ‘isEvent’ of null”. I’m using Chrome, if that’s relevant.
Productivity seems to include both “improve productivity by fighting akrasia” and “improve productivity by optimizing your workflows”, for example “What’s your favorite notetaking system?”, so it’s not a full overlap.
Procrastination is the tag that feels most redundant next to Akrasia to me.
+1 Dissolving Questions
I put 1–3 on most posts, but I’ve gone up to 5 or more on some. Probably many of the posts I’ve tagged could have other tags applied to them that I didn’t think of at the time. It’s not about a hard number; for each individual tag, the question is: would someone exploring this tag likely think this post was relevant to it?
I notice that when you try to tag a post but its relevance is in the negatives and your vote doesn’t bring it above zero, the site doesn’t give any feedback. It looks like there was a bug, or like your connection messed up and didn’t submit the tag properly.
This is somewhat mitigated now by being able to look at the tag voting page to confirm that your vote went through, but a lot of people won’t know to do that.
This raises the question of what serial fiction posts should be tagged as, because some of the posts you untagged are now at the top of the untagged posts page.
Maybe we could have “serial fiction” as a containment tag much like “Newsletters”.
Or are we going for a norm where some posts do not merit any tag at all, and the untagged posts page is doomed to become a list of them?
My sense is that Empiricism is specifically about experimentation and making beliefs pay rent in anticipated experiences, while a lot of the posts in Science are about academia and the social institutions of science.
There’s some overlap between them. Additionally, Empiricism overlaps with Anticipated Experiences and Science overlaps with Replication Crisis.
Empiricism was added more recently. I went back and tagged some of the posts in Science as Empiricism.
My suggestions for changes/merges:
Change Alpha (algorithm family) to DeepMind, which would then include DM’s other projects like Agent57 and MuZero. I think it’s what more people would look for, and it has more forward compatibility.
Merge Blues and Greens and Coalitional Instincts; they’re about basically the same thing. I don’t like either name; “Tribalism” would probably be better. Blues and Greens is jargon that isn’t used enough, and Coalitional Instincts is too formal.
Merge Good Explanations (advice) into Distillation and Pedagogy. Distillation and Pedagogy is slightly broader, but not enough for Good Explanations to need to be its own tag.
I agree with most of your analysis in the comments (many downsides to karma, multiple choice has some advantages intuition-wise and makes it easier for a single user to make an ordering), but I thought of a couple more points. My mind seems to only be coming up with downsides of the multiple choice system, which might be because I’m prone to rationalizing why the status quo is good.
Multiple choice has strategic voting implications too. If I think a 150 karma post and a 50 karma post are both “Top” relevance, but that the 50 karma post is better, I might rate the 150 karma post as “high” or lower.
Multiple choice makes it harder to see where in the ordering your vote would place a post. Additionally, your vote either has no immediate effect or moves the post around by a lot, so a fine-grained adjustment is impossible. That might not necessarily be bad, though, if we want post karma to matter.
If ordering is based only on the median vote, this makes it easy for a troll to vandalize a tag page even when the tagging system is mature. Just put the tag on a bunch of posts that don’t already have it and rate them all “Top”. With karma, the post order is more stable once a lot of people have voted. (This is the double edge of making it easy for a single person to have a big impact.)
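As a toy illustration of that last point (the vote numbers and the 0–3 relevance scale here are hypothetical, not the actual LessWrong scoring code): under a median-based ordering, one maximal vote on a previously unvoted post immediately outranks a well-established post, while a karma-style sum barely moves.

```python
from statistics import median

# Hypothetical relevance votes on a 0-3 scale ("Top" = 3).
established_post = [3, 2, 2, 3, 2, 2, 3]  # many earlier voters
fresh_troll_post = [3]                    # a single troll voting "Top"

# Median-based ordering: the troll's one vote already beats the
# established post, so the vandalized post jumps to the front.
print(median(established_post))  # 2
print(median(fresh_troll_post))  # 3

# Karma-style sum: one vote is small next to an established total,
# so the ordering stays stable as more people vote.
print(sum(established_post))  # 17
print(sum(fresh_troll_post))  # 3
```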
However, these concerns balance out against the benefits you listed, so overall I don’t have a strong opinion on which is better.
PSA: Voting on relevance is an important, underserved, and easy-to-contribute-to area of the tagging system.
One person can create a tag, make a good description, and find a bunch of posts that fit it, but it takes multiple people’s votes to create a decent ordering of posts from most to least relevant. Which posts are listed first will be an important part of the user experience.
This will be especially important for the more crowded tags, like the core tags, History, Math, Science, Statistics, AI Risk, and so forth.
Contributing can be as easy as just going through the list and upvoting posts that you’ve read and think are a good fit for the tag.
Edit: It would be nice to have a spreadsheet sorting tags by something like average relevance karma per post, to identify which tags most need votes.
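The metric in that edit is simple to compute. A minimal sketch, assuming someone exports per-tag post counts and total relevance karma (the tag names and numbers below are made up for illustration):

```python
# Hypothetical export of tag statistics; real numbers would come
# from the site's tag-voting pages or a data dump.
tags = {
    "AI": {"posts": 120, "total_relevance_karma": 310},
    "Akrasia": {"posts": 25, "total_relevance_karma": 140},
    "History": {"posts": 40, "total_relevance_karma": 30},
}

# Tags with the lowest average relevance karma per post most need votes,
# so sort ascending by that average.
ranked = sorted(
    tags.items(),
    key=lambda kv: kv[1]["total_relevance_karma"] / kv[1]["posts"],
)
for name, stats in ranked:
    avg = stats["total_relevance_karma"] / stats["posts"]
    print(f"{name}: {avg:.2f} karma/post")
```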
Diseased disciplines: the strange case of the inverted chart is an interesting case because tagging it correctly feels like a spoiler.
The tags page occasionally, without warning, replaces itself with the message “Error: TypeError: Cannot read property ‘_id’ of null”, forcing me to reload. Has anyone else seen this?
Edit: I also got the same error on the page for a post, when I added a tag, the server response was slow, and I tried to add it again.
I think there should be a tag for discussion of present-day AI progress outside of the context of alignment. For example “Understanding Deep Double Descent” https://www.lesswrong.com/posts/FRv7ryoqtvSuqBxuT?lw_source=posts_sheet . Right now the only tag for that is the core tag “AI”, which is too broad.
But I’m not sure what to call it. Ideas: “Prosaic AI”, “Machine Learning”, “Neural Networks”, “AI Progress”, “AI Capabilities”.
A way I can contribute to the site without having to come up with brilliant original ideas? Excellent!
It looks like two of the predictions, that the majority of teacher–student interactions would be remote and that the majority of meetings would be remote, have flipped from false to true between 2019 and 2020, but because of a global pandemic rather than directly because of advances in technology.
I’ve had tulpas for about seven years. I alternate between the framework of them all being aspects of the same person versus the framework of them being separate people. I’ll have internal conversations where each participant is treating the other as a person, but in real life I mostly act as a single agent.
Overall I would say their effect on my intelligence, effectiveness, skills, motivation, etc. has been neither significantly positive nor significantly negative. I consider the obvious objections largely valid: your tulpa is running on the same hardware, with the same memories and reflexes, and you have to share the same amount of time as you had before. On the other hand, I escaped any potential nightmare scenarios by having tulpas that are reasonable and cooperative.
When people in the tulpa community talk about the benefits, they usually say their tulpa made them less lonely, or helped them cope with the stresses of life, or helped them deal with their preexisting mental illness. And even those benefits are limited in scope. The anxiety or depression doesn’t just go away.
I think one of the main ways tulpas could help with effectiveness has to do with mindset and motivation. It’s the difference between a vague feeling that maybe you ought to be doing something productive and your anime waifu yelling at you to do something productive. Tulpas may also have more of an ability to take the outside view on important decisions.
Overall if you’re just looking for self-improvement, tulpa creation is probably not the best value for your time. I mostly got into it because it seemed fun and weird, which it fully delivered on.
Vote manipulation with sockpuppets, apparently.
A non-religion-related example that I think Eliezer also talked about is “the power of positive thinking”. Suppose someone hears the claim “If you believe you will succeed, then you will” and accepts it. However, this person is unable to convince himself that he can succeed at his goals. He believes that believing in his own ability is virtuous (belief in belief), but he doesn’t actually hold the belief.
… And now we’ll forever suspect that anyone with a good idea is actually an alien invader masquerading in a human body.