Hello! I work at Lightcone and like LessWrong :-). I have made some confidentiality agreements I can’t leak much metadata about (like who they are with). I have made no non-disparagement agreements.
Divergence uses a nabla, not a delta.
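(For concreteness, in case I’m pointing at the wrong thing: for a vector field $F$, divergence is written with the nabla, $\nabla \cdot F$, whereas $\Delta$ usually denotes the Laplacian, $\Delta f = \nabla \cdot \nabla f$.)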
To clarify: “coherence” here means that your credences obey the probability axioms?
I wrote this tl;dr for a friend, and thought it worth sharing. I’m not sure it’s accurate. I’ve only read the “Recap”.
Here is how I understand it.
Suppose that, depending on the temperature, your mirror might be foggy and you might have goose pimples. As in, the temperature helps you predict those variables. But once you know the temperature, there’s (approximately) nothing more you learn about the state of your mirror from your skin, and vice versa. And! Once you know whether your mirror is foggy, there’s basically nothing left to learn about the temperature by observing your skin (and vice versa). But you still don’t know the temperature once you observe those things.
This is a stochastic (approximate) natural latent. The stochasticity is that you don’t know the temperature once you know the mirror and skin states.
Their theorem, iiuc, says that there does exist a variable whose exact state you (approximately) know after you’ve observed either the mirror or your skin.
(I don’t currently understand exactly what coarse-graining process they’re using to construct the exact natural latent).
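In symbols (my own attempt at formalising this, so treat it with suspicion), with T = temperature, M = mirror, S = skin: mediation is $I(M; S \mid T) \approx 0$ (knowing the temperature, the mirror and skin tell you ~nothing about each other); redundancy is $I(T; S \mid M) \approx 0$ and $I(T; M \mid S) \approx 0$ (once you’ve seen one observation, the other tells you ~nothing more about the temperature); stochasticity is $H(T \mid M, S) > 0$ (you still don’t know the temperature after seeing both). The theorem, iiuc, then gives some coarse-grained $\Lambda$ with $H(\Lambda \mid M) \approx 0$ and $H(\Lambda \mid S) \approx 0$: a variable whose exact value you can read off from either observation alone.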
(Remember that, IIRC, we still have the misfeature that you can’t strong-upvote your own comments. Perhaps you mention this; I haven’t read much of your comment or these threads.)
This strikes me mostly as an argument for cheaper housing!
Curated. I thought this was a pretty interesting result. I’m not sure if I should have been surprised by it, but I was. They also point to a decent amount of interesting follow-up work, though I expect that to generalise less well than “existence proof” papers like this one.
I often enjoy this group’s work finding interesting “info leaks” in LLM behaviour, like their previous lie detector work.
If by “very widespread” you mean like ~10% of votes, I disagree. Do you mean that?
If only a handful of people did as you propose to do—then much of the usefulness would be lost, though not most.
If by “much of the usefulness would be lost” you mean something like “people would see comments that they liked <90% as much” or “people would get less than 90% of the information about what some kind of weighted LessWrong-consensus thought”, I disagree. Do you mean that?
The obvious consequence of such a norm is comments having to say things like “don’t upvote this comment too much, because otherwise you will be robbing me of replies which I would otherwise get” or “please downvote this comment, so that it gets pushed down, so that I can get people replying to me, instead of staying silent because of a weird and incidental fact of comment section sorting”. This would be very bad, obviously.
I agree it’s obvious that it at least pushes some in this direction. I think some versions of this could be very bad, though mostly it would be not that bad.
The things which you are trying to do with the karma system would destroy its usefulness.
By “destroy its usefulness”, how much less useful do you mean to say it would become?
Comments that hijack an unrelated thread can be downvoted (and thus hidden), or their authors censured for abusing the commenting system. This is a non-problem on Less Wrong.
I didn’t mean, in that comment, to imply that Ben was hijacking. I was just trying to provide at least one example of a pathological interaction with threading and karma.
Having refreshed myself on Ben’s comment and its parent, I now think he was doing something continuous with hijacking.
I think you’re asking if the whole mod team agrees with “the LW karma system is NOT robust, well-implemented and generally used very properly by users”.
I think in general the LW team thinks that the karma system is generally used properly by users (not sure about “very properly”; for example, I think we’re probably not skilled enough, as a userbase, at using it. Habryka might even disagree with “properly”, because he so strongly wants more downvotes).
I don’t know what opinions people have about the implementation. I think, for example, most people on the team think that agreement voting is quite good, that having weak/strong votes is good, and that our vote scaling is good.
For “robust”, I think most people think it fails sometimes on the actual website, and not just in possible corner cases.
I think your comment was a little bit “cheating” against LW’s systems, and thus deserving of a little downvote. I don’t know if a norm exists against this kind of cheating, but I think it should.
IIRC, I kinda perceived that you were trying to push back against a general vibe spread throughout the comment section. Your comment is basically not engaging with cata’s comment at all. You reference the video, which cata doesn’t, and you reference “believing everyone is doing the best they can”, which is not something cata says. You were pushing against the general zeitgeist, and you did it in a way that uses a quirk of the commenting system to give it prominence.
I think you should have written a top-level comment pushing back against the other comments, perhaps linking to them. And then the karma system could have buoyed it to the top, or not.
I (genuinely) appreciate the downvotes, disagrees and critical comments. I want to expand a bit on my thinking, so people can give me pushback that will convince me more.
In threaded commenting systems like ours, replying to popular comments makes your comment much more visible than it would be as a top-level comment. This feature is common enough that on Reddit (another threaded commenting system) people talk about “hijacking” a top comment to broadcast something they’re interested in.
I think it’s pretty plausible that people should have a higher bar for commenting on highly upvoted comments (especially ones without much discussion underneath them). I’m definitely not sure though; if I imagine people doing that, I worry about the responses that get lost.
But the responses do come with a cost. Take a look at www.lesswrong.com/moderation: I don’t think it would be better if more people had to read the rejected content (enough people for each piece of content that it got sufficiently downvoted). So I think it does seem worth, in some circumstances, risking a bit of silence for a better signal:noise ratio.
On a slightly more meta level: lately I’ve been thinking about Well-Kept Gardens Die By Pacifism and its exhortations to downvote more. As Ben Pace once said: sometimes, when I notice I’m undershooting a goal, I like to make sure I overshoot it for a bit, and then dial back. So I am trying out more votes of “I wish this comment hadn’t happened to me at this point”, regardless of whether it was a good comment in some contextless way.
I’d also been reading this thread on why people don’t explain their downvotes, and that tipped me even further in the direction of explaining my downvote, as I’d get a bit more data on what it was like (of course, every good downvote is alike, and every bad downvote is bad in its own way, so I’m sure I’m only getting a very particular sample).
Mod note: this post triggered some of our “maybe written by LLM” flags. On sampling parts of the post, I think it’s mostly not written by an LLM.
Separately, having skimmed the post, it seems like it’s an attempt at establishing and reasoning about a potential regularity. I’m not trying to endorse the proposed hypotheses for explaining the regularity (I didn’t quite read enough of this post to even be sure what the hypotheses were).
I’m aware! It doesn’t help because I had to read the subthread first to see if I want to read it.
(I’ve downvoted this for a rather unusual reason: it starts a big thread under the topmost comment, and I wish it weren’t in the way of seeing the next-most-upvoted comment.
This is pretty weird, cos I think this is a fine comment in isolation. I’m trying out downvoting more (that is, being a more active “micro-moderator”) and I’m sure I’ll do it wrong in lots of situations).
Just an oversight this time
Curated. It’s good. I’m very glad to see more high-quality fiction on LessWrong, and would like to curate more of it.
(Most of the mod team agreed about your earlier post after it was brought to our attention, so it’s been Frontpaged. We have a norm against “inside baseball” on the frontpage: things that are focused too much on particular communities associated with LW. I think the mod who put it on Personal felt that it was verging on inside baseball. The majority dissented.)
This is a bit of an aside, but I hesitate to be too shocked by differences in funding:DALY ratios. After all, what you really want to know is change in DALYs at a given level of funding. It seems pretty plausible that some diseases are 10x (or even 100x) as cost-effective to ameliorate as others.
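(Toy illustration, with made-up numbers: suppose disease A receives $1,000 of funding per DALY of burden and disease B only $100, which makes B look badly neglected. But if averting a DALY of A costs $500 while averting a DALY of B costs $50,000, the marginal dollar still does more good spent on A, despite its much “worse”-looking funding:DALY ratio.)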
That said, funding:DALY seems like a fine heuristic for searching for misallocated resources. And to be clear, I expect it’s not actually a difference in cost-effectiveness that’s driving the different spending, but I’d want to check before updating too much.
I would definitely consider collaborative filtering ML, though I don’t think people normally make deep models for it. You can see on Recombee’s website that they use collaborative filtering, and they use a bunch of weasel language that makes it unclear whether they actually use much of anything else.
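(For anyone unfamiliar, here’s a minimal sketch of the sort of thing I mean by collaborative filtering: matrix factorisation on a toy ratings matrix. The numbers and setup are made up for illustration, and I’m not claiming this is what Recombee actually runs; the point is just that the user/item embeddings are learned from data, which is why I’d call it ML.)

```python
# Toy collaborative filtering via matrix factorisation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Made-up user x item ratings; 0 means "not yet rated".
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)
observed = R > 0

n_users, n_items, k = R.shape[0], R.shape[1], 2
U = rng.normal(scale=0.1, size=(n_users, k))  # learned user embeddings
V = rng.normal(scale=0.1, size=(n_items, k))  # learned item embeddings

lr, reg = 0.01, 0.02
for _ in range(5000):
    # Gradient descent on squared error over the observed entries only.
    err = np.where(observed, R - U @ V.T, 0.0)
    U += lr * (err @ V - reg * U)
    V += lr * (err.T @ U - reg * V)

# Predictions for the unrated cells come from the learned embeddings.
print(np.round(U @ V.T, 2))
```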
I try pretty hard (and I think most of the team does) to at least moderate AI x-risk criticism more leniently. But of course, it’s tricky to know if you’re doing a good job. Am I undercorrecting or overcorrecting for my bias? If you ever notice some examples that seem like moderation bias, please lmk!
Of course, moderation is only a small part of what drives the site culture/reward dynamics.