I’m an admin of this site; I work full-time on trying to help people on LessWrong refine the art of human rationality.
Longer bio: www.lesswrong.com/posts/aG74jJkiPccqdkK3c/the-lesswrong-team-page-under-construction#Ben_Pace___Benito
This post reminded me of the exercises in Calibrating with Cards, a post which very nicely advises what to pay attention to during magic practice.
Correct: “the screen [of a phone] can be used not only to see the people you call but also for studying documents and photographs and reading passages from books.” I feel like this would’ve been an impressive prediction in 2004.
This is an excellent prediction for 1964, and I respect Asimov a great deal for this.
Hm, k, have added an edit.
The same as always. Karma score, with a hint of magic (i.e. putting new comments higher for a period on the order of a few hours). As it says in the OP section titled “How the system works”, agree/disagree voting has no effect on sorting.
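For concreteness, here is a minimal sketch of a sorting rule with that shape: karma, plus a boost that decays to zero over a few hours. The specific numbers and the linear decay are hypothetical illustrations; only the general idea (karma with a short-lived recency bonus) comes from the description above, not from LessWrong's actual code.

```python
import time

RECENCY_WINDOW_SECONDS = 3 * 60 * 60  # "on the order of a few hours" (assumed)
RECENCY_BOOST = 5.0                   # hypothetical size of the "hint of magic"

def sort_key(karma, posted_at, now=None):
    """Karma plus a boost that decays linearly to zero over the recency window."""
    now = time.time() if now is None else now
    age = now - posted_at
    boost = max(0.0, RECENCY_BOOST * (1 - age / RECENCY_WINDOW_SECONDS))
    return karma + boost

# A brand-new comment can float above a slightly higher-karma old one,
# then settle back down to pure karma order once the boost expires.
now = time.time()
comments = [
    ("old comment, karma 10", 10, now - 86400),  # one day old
    ("new comment, karma 8", 8, now - 600),      # ten minutes old
]
comments.sort(key=lambda c: sort_key(c[1], c[2], now), reverse=True)
```

With these toy numbers the ten-minute-old comment sorts first; a day later, plain karma order takes over.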
(It’s exclamation point then greater than symbol, not the other way around. Answer in FAQ. Have edited your comment to add it.)
(Having said that, it looks like OP has done great work and this is a big red flag:)
I have attempted to make a comment on SMTM’s post linking to many of those studies, but they have not approved the comment. I have also attempted to contact them on Twitter (twice) and through email, but have not received a reply. All of this was over one week ago, and they have, since then, replied to other people on Twitter and approved other comments on their post, but haven’t commented on this. So I have no idea why their literature review excludes these studies.
This isn’t the core of why I think you think that’s a red flag, but for the record I don’t think a week is that much time to respond to public criticism. I have many important emails I don’t reply to for longer.
Nothing much. It’s probably the right call to just remove self-agreeing.
Pardon the confusion. It was frontpaged, I saw your comment, then moved it back to personal blog. The thought didn’t occur to me that you would then be mildly gaslit about your comment!
And no, everything, including crossposts, gets manually processed and frontpaged-or-not. Occasional simple errors make it through. Thanks to MondSemmel for the comment that pointed this one out.
Something I’m hoping to see (and that would constitute positive evidence for me): a comment with a high/low agree score, someone responding along the lines of “Huh, seems like lots of people agree/disagree with this comment, which seems wrong to me, let me flesh out a counterargument here”, that reply leading many users to change their minds, and future comments making that point getting a very different agree/disagree score.
Fair point.
Hm, I think there are lots of examples. First to come to mind is a recent reply to Eliezer by Holden, of which I think a severe criticism was respectfully described like this:
Something like half of this post is blockquotes. I’ve often been surprised by the degree to which people (including people I respect a lot, such as Eliezer in this case) seem to mischaracterize specific pieces they critique, and I try to avoid this for myself by quoting extensively from a piece when critiquing it.
And lines like this:
“Most of Eliezer’s critique seems directed at assumptions the report explicitly does not make about how transformative AI will be developed, and more broadly, about the connection between its (the report’s) compute estimates and all-things-considered AI timelines.”
Adding for redundancy: you don’t have to double-click fast. Clicking twice with any time gap between the clicks works, so you can tap more slowly than would trigger the zoom action.
Yes, it’s never an equilibrium state for Eliezer communicating key points about AI to be the highest karma post on LessWrong. There’s too much free energy to be eaten by a thoughtful critique of his position. On LW 1.0 it was Holden’s Thoughts on the Singularity Institute, and now on LW 2.0 it’s Paul’s list of agreements and disagreements with Eliezer.
Finally, nature is healing.
A few responses:
Opt-in sounds like a lot of cognitive overhead for every single comment, and it also (in principle) allows people to avoid having the truth value of their comments judged when they make especially key claims in their argument.
Re “what if a single comment states multiple positions? You might agree with some and not with others” ← I expect the result is that (a) the agree/disagree button won’t be used that much for such comments, or (b) it just will be less meaningful for such comments. Neither of these seem very costly to me.
Re “what if you’re uncertain if you’ve understood the commenter’s position… The vote is biased by people who think they correctly understood the position.” ← If lots of people agree with a given comment because of a misunderstanding, making this fact known improves others’ ability to respond to the false belief. In general my current model is that while consensus views can surely be locally false, understanding what the consensus is helps people respond to it faster, with more focus, and discover the error.
Re “what if the comment isn’t an opinion, it’s a quote or a collation of other people’s perspectives?” ← Seems like either the button won’t get much use or it will be less meaningful than on other occasions. Note that there are many comments on the site that also don’t get many upvotes/downvotes, and I don’t consider this a serious reason to make upvoting/downvoting optional on comments just because it’s often not used.
One more thing is that my guess is the agree/disagree voting axis will encourage people to split up their comments more, and state things that are more cleanly true or false. (For example, I felt this impulse to split up these two comments upthread.)
Though that raises the interesting question of whether some apparent strengths may turn into deficiencies...
I, for one, am interested in how the rest of this sentence goes.
My first thought against would be that it would end up pretty misleading. Like, suppose the recent AGI lethalities post had this, and Eliezer picked “there is at least a 50% chance of extinction from AGI” as the claim. Then I think many people would agree with it, but that would look (on first glance) like many people agreed with the post, which actually makes a way more detailed and in-depth series of claims (and stronger claims of extinction), and create a false consensus.
(I personally think this is the neatest idea so far, that allows the post author to make multiple truth-claims in the post and have them independently voted on, and doesn’t aggregate them for the post overall in any way.)
Ideally I would link to concrete examples but I’m afraid it would come across as me calling out someone else, especially if they believe they put in their best effort in writing a serious essay, so I will have to leave it to your imagination.
For the record, I think critiques can describe someone’s writing accurately and critically without being an unreasonable aggression, and I think concreteness makes critiques much better. Your post would be 2–5x as valuable to me if I had concrete posts in mind for what you were pointing to, when you discuss old posts that were better than new posts, or posts that use jargon to an excessive degree, more than academia does.
Thanks!
Thanks so much for talking to the folks involved and writing this note on your conclusions, I really appreciate that someone did this (who I trust to actually try to find out what happened and report their conclusions accurately).