People dropping in on an unfamiliar website can have very hair-trigger reactions to any sort of AI art. I heard someone say they felt like immediately writing off a (good) Substack post as fake content they should ignore because of the AI art illustration at the top of the post. And I think the illustration generator is a built-in option on Substack, because I see constant AI illustrations on Substacks of people who are purely writers and who, as far as I can tell, aren’t very interested in art or web design. But this person wasn’t familiar with Substack, so their brain just went “random AI slop site, ignore”.
Bad call. You don’t exactly have an unlimited supply of people who have a solid handle on the formative LW mindset and principles from 15 years ago and who are still actively participating on the forums, and latter-day LessWrong doesn’t have enough of a coherent and valuable identity to stand firmly on its own.
A key idea in the mindset that started LessWrong is that people can be wrong. Being wrong can exist as an abstract thing to begin with; it’s not just a euphemism for poor political positioning. And people in positions of authority can be wrong. Kind, well-meaning, likable people can be wrong. People who have considerate, friendly conversations that are a joy to moderate can be wrong. It’s not always easy to figure out right and wrong, but it is possible, and it’s not always socially harmonious to point it out, but it used to be considered virtuous all the same.
A forum that has principles in its culture is going to have cases where moderation is annoying around something or someone who doggedly sticks to those principles. It’s then a decision for the moderators whether they want to work to keep the forum’s principles alive or to have a slightly easier time moderating in the future.
How did you connect the objects you see as glowing with UV light specifically? Couldn’t the glow be a hallucination, or a perceptual rewiring like the persistent “breathing wallpaper” LSD users can start seeing, or some different physical property entirely? Can you see UV light emitted by machines that should be invisible, like the person in the New Scientist link claims he could after he got an artificial lens?
On leaving hospital, I decided I deserved a pint of bitter. Standing at the bar of my local pub, I noticed that their device for detecting counterfeit banknotes was emitting very bright bluish light. I mentioned this to the barman, who looked at me with a very quizzical expression but made no comment. I then realised that he couldn’t see the light: it was visible through my right eye alone.
Huh, apparently this is a thing. Retinas can see into UV, but UV light is normally filtered out by the lens.
Medical article about cataract surgeries and UV protection
No idea how you could start seeing UV by rewiring your brain if your eyeballs still have the original lenses though.
what do you mean by “psychic pain”?
As far as I’ve understood, a big idea with dukkha is that you have an intense desire for things to not be the way you perceive them to be, even though you might not have any concrete means of changing things, and the psychic pain is your constant awareness that reality isn’t the way you want. “Regret” and “yearning” both seem like good words to describe types of this, though you probably want to imagine the more extreme versions of both, not just mild wistfulness.
If you’ve looked into predictive processing, this sounds familiar. The low-level story might be something like being stuck with persistent prediction errors where you can neither update your model nor act to change your circumstances.
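A minimal toy sketch of that stuck-loop idea (the numbers, names, and update rule here are all my own invention, not a model from the predictive processing literature):

```python
# Toy predictive-processing loop: an error can normally be discharged by
# updating the model or by acting on the world. Block both channels and
# the error just persists -- the "stuck" case described above.

def step(prediction, world, can_update_model, can_act):
    error = world - prediction
    if can_update_model:
        prediction += 0.5 * error   # resolve error by changing the model
    elif can_act:
        world -= 0.5 * error        # resolve error by changing the world
    return prediction, world, abs(error)

prediction, world = 0.0, 10.0
for t in range(5):
    prediction, world, err = step(prediction, world,
                                  can_update_model=False, can_act=False)
    print(f"t={t}: prediction error stays at {err}")
```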
Presumably we have the capacity to suffer because it facilitated our survival somehow. How are you so sure you don’t need to hear the message suffering was sending you?
We could try to figure this out by teaching lots of people to get rid of suffering and then watching them to see if it fucks them up.
This seems more like referring to a completely different thing, which is not suffering, but calling it “suffering” for some reason.
Well, yes. It’s a translation of a word that’s used somewhat like technical vocabulary in the meditation tradition. See the article about the translations, “Dukkha is a bummer”.
Meditation isn’t supposed to make the pain go away; it’s supposed to train you to suffer less from the pain. So for meditation the idea would just be to make it a game of seeing if you can train to do it longer and better despite the aversive feelings, and learn to detach from the feelings. But if your internal motivation is toast, that might be hard to make happen. You might try seeing an actual meditation teacher about this.
The point where you have started the task and then get bored and stop sounds like the key point here. Can you try focusing (maybe in the sense of Gendlin’s Focusing) in detail on what goes on with your mind, with the task, and with how you’re conceptualizing working on the task right now and in the future? My experience is that I can procrastinate on starting a task, but when I do start it, if it’s something like housework I’ve done hundreds of times before, I can go on autopilot.
I guess it’s mostly housework-like autopilotable stuff for me now. After 20 years of trying, I threw in the towel on getting myself to do stuff that’s difficult, that I don’t really want to do, and that not doing won’t literally get me killed, which pretty much crashed the whole studying-and-employment pipeline.
If the story doesn’t have a title, but people want to discuss it, some kind of title will get established by convention, and it should be something that “sounds right” for the story. “Clarity didn’t work...” doesn’t sound right to me as the title of the story. Also now that you reminded me of this, I went “wait, wasn’t the story just commonly known as The Whispering Earring”, and indeed, here’s /r/rational calling it that in 2019.
Croup and Vandemar, where one is a talky weasel and the other is a simple-minded bruiser, seem to be the Evil Duo subtype.
There’s maybe a stronger definition of “vibes” than Rafael’s “how it makes the reader feel”, that’s something like “the mental model of the kind of person who would post a comment with this content, in this context, worded like this”. A reader might be violently allergic to eggplants and would then feel nauseous when reading a comment about cooking with eggplants, but it feels obvious it wouldn’t then make sense to say the eggplant cooking comment had “bad vibes”.
Meanwhile if a poster keeps trying to use esoteric Marxist analysis to show how dolphin telepathy explains UFO phenomena, you might start subconsciously putting the clues together and thinking “isn’t this exactly what a crypto-Posadist would be saying”. Now we’ve got vibes. Generally, you build a model, consciously or unconsciously, of what the person is like and why they’re writing the things they do, and then “vibes” are the valence of what the model-person feels like to you. “Bad vibes” can then be things like “my model of this person has hidden intentions I don’t like”, “my model of this person has a style of engagement I find consistently unpleasant” or “my model is that this person is mentally unstable and possibly dangerous to be around”.
This is still somewhat subjective, but feels less so than “how the comment makes the reader feel”. Building the model of the person based on the text is inexact, but it isn’t arbitrary. There generally needs to be something in the text or the overall situation to support model-building, and there’s a sense that the models are tracking some kind of reality, even though inferences can go wrong and different people can pay attention to very different things. There’s still another complication: different people also disagree on goals or styles of engagement, so they might build the same model and disagree on the “vibes” of it. Even this isn’t completely arbitrary, though; most people tend to agree that the “mentally unstable and possibly dangerous to be around” model has bad vibes.
I feel like it’s a thing where you should use human moderator judgment once the account isn’t new. Figure out how the person is being counterproductive, warn them about it, and if they keep doing the thing, ban them. Ongoing mechanisms like this make sense for something like Reddit where there is basically zero community at this point, but on LW if someone is sufficiently detached from the forum and community that it actually makes sense to apply a mechanical paper cut like the rate limit on them after years of them being on site and accumulating positive karma, they probably shouldn’t be here to begin with.
The basic problem is that it’s not treating the person as a person, the way a human moderator actually talking to them and going “hey, we think you’re not helping here, here’s why … in the future could you …” (and then proceeding to a ban if there’s no improvement) would be. People occasionally respond well to moderator feedback, but being hit by the rate limiter robot is pretty likely to piss off just about any invested and competent person, and might also make them go “cool, then I’ll treat your thing as less of a community made of people and more like a video game to beat on my end as well”, which makes it less likely for things to improve in the future.
How does it make sense to just run the rate limiter robot equally on everyone, no matter how old their account is and how much total karma they have? It might make sense for new users, as a crude mechanism to make them learn the ropes or find out the forum isn’t a good fit for them. But presumably you want long-term commenters with large net-positive karma to stay around and not be annoyed by the site UI by default.
A long-term commenter suddenly spewing actual nonsense comments where rate-limiting does make sense sounds more like an ongoing psychotic break, in which case a human moderator should probably intervene. Alternatively, if they’re getting downvoted a lot it might mean they’re engaged in some actually interesting discussion with lots of disagreement going around, and you should just let them tough it out and treat the votes as a signal for sentiment instead of a signal for signal like you do for the new accounts.
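To sketch what I mean concretely (a hypothetical illustration; the function, field names, and thresholds are all made up by me, not LW’s actual moderation code):

```python
from datetime import timedelta

def should_auto_rate_limit(account_age: timedelta, total_karma: int,
                           recent_net_downvotes: int) -> bool:
    """Hypothetical gate: only let the robot touch new or net-negative accounts."""
    established = account_age > timedelta(days=365) and total_karma > 500
    if established:
        # Long-term, net-positive contributors get human moderator
        # judgment (warnings, then a ban) instead of the robot.
        return False
    # Crude automatic brake for new accounts still learning the ropes.
    return recent_net_downvotes >= 5
```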
Out of curiosity, what evidence would change your mind?
This one seems pretty easy. If multiple notable past contributors speak out themselves and say that they stopped contributing to LW because of individual persistently annoying commenters, naming Said as one of them, that would be pretty clear evidence. Also socially awkward of course. But the general mindset of old-school internet forum discourse is that stuff people say publicly under their own accounts exists and claimed backchannel communications are shit someone made up to win an argument.
Just gonna chime in that I agree with Said here that this is not just a two-way thing but also a question of what the audience gets to see. I think his comments on your posts are valuable, and banning him makes things worse as far as I’m concerned.
It might be that they’re one novel thing he could both discern as a specific thing and pretty much completely understand the purpose of once he started paying attention to them. Just about everything in a modern city is an unfamiliar thing tied to a large context of other unfamiliar things, so you’ll just zone out when you’re missing the context, but stairs and carpets are pretty much just stairs and carpets.
“Clarity didn’t work, trying mysterianism” is the title of a short story by Scott Alexander
Was it the title? I always thought Scott used the phrase as commentary on why he was posting the story, same as gwern is doing here. As in, he tried to clearly say “an omnipresent personal AI agent that observes your life and directly tells you the best way to act in every situation you encounter would be a bad thing because building up your own mind into being able to overcome challenging situations is necessary for a meaningful life”, people didn’t buy it, and then he went “okay, let’s try this untitled short story to illustrate the idea”.
For this gwern thing though, I’ve no idea what the failed preceding non-mysterian attempt was.
I haven’t looked into this, but I’m guessing the IQ results are from some form of language barrier?
Many people have tried very hard to find explanations for the IQ results that are something other than “low intelligence” for decades. If a replicating result that provides such an explanation had been established, it would have been broadly publicized in popular media and even laymen would know about it. Instead, we’re being told we are not supposed to look into this topic at all.
It seems like the neologism is mostly capturing the meaning of signal from Shannon’s information theory (which “signal and noise” points towards anyway), where you frame things by having yes/no questions you want answered; observations that answer your questions are signal, and observations that don’t are noise. So if you need to disambiguate, “signal (in the information-theoretic sense)” could be a way to say it.
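As a toy illustration of that framing (the coin question, the numbers, and the function name are mine, just to make the definition concrete): an observation is signal relative to your question exactly when it reduces your uncertainty about the answer, measured in bits.

```python
import math

def entropy(p):
    """Shannon entropy in bits of a yes/no question with P(yes) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Question: "is this coin biased towards heads?" Prior: 50/50.
p_biased = 0.5
p_heads_if_biased = 0.9   # likelihood of heads under each hypothesis
p_heads_if_fair = 0.5

before = entropy(p_biased)

# Observation: the coin comes up heads. Bayesian update on the question.
numer = p_heads_if_biased * p_biased
p_biased = numer / (numer + p_heads_if_fair * (1 - p_biased))
after = entropy(p_biased)

print(f"{before:.3f} bits -> {after:.3f} bits: the flip was signal")
# An observation independent of the coin (say, today's weather) leaves
# the posterior and the entropy unchanged: noise relative to this question.
```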
I’m pretty sure people drifted away because of a more complex set of dynamics and incentives than “Said might comment on their posts” and I don’t expect to see much of a reversal.