I’m still not convinced it’s a good idea to get enlightened, but thanks for the detailed explanation.
Of course you’re right that there are no perfectly clear bright-line rules that would completely fix these problems; the question is whether there is a clear enough rule that would ameliorate them. You would have been trading a judgment call on whether all of Said’s comments across the whole site were on net beneficial for a much easier judgment call on whether a given note is sufficient. And whether Said’s comments were net beneficial was evidently such a close call that you dithered over the decision for literal years, which would seem to indicate that a relatively small nudge could have tipped his contributions to the positive side.
Also, if the door to Said changing his behavior was so completely closed, I’m really confused about what all those hundreds of hours were spent on.
This is a response to asking him to be, in full generality, more tactful or “prosocial,” not to asking him to follow a clear bright-line rule. I’ll grant that Said may not be willing or able to be tactful enough in some situations, but there seems to be rough consensus that his comments have a lot of value in others, so my suggestion would be to try to delineate those situations.
Sure, I don’t mean to imply that Said is beyond reproach, or that all his comments were necessarily good. Just that I think insofar as this post was an attempt to address the reasons Said-defenders felt he needed so much defending, it has failed.
That’s fair enough, but it only demonstrates that he wasn’t willing to unilaterally and proactively do this, not that he wouldn’t have cooperated if you had imposed it on him. It’s baffling to me that you spent hundreds of hours on this issue without (apparently) even attempting to impose a compromise that would have brought out the best in both Said and his detractors.
What you’re doing here is conflating contempt based on group membership with contempt based on specific behaviors. Sneer-clubbers will sneer at anyone they identify as a Rationalist simply for being a Rationalist. Said Achmiz, in contrast, expresses some amount of contempt for people who do fairly specific and circumscribed things, like writing posts that are vague or self-contradictory or that promote religion or woo. Furthermore, if authors had been willing to put a disclaimer at the top of their posts along the lines of “This is just a hypothesis I’m considering. Please help me develop it further rather than criticizing it, because it’s not ready for serious scrutiny yet,” my impression is that Said would have been completely willing to cooperate.

But possible norms like that were never seriously considered because, in my opinion, LW’s issue is not the “LinkedIn attractor” but the “luminary attractor”. I think certain authors here see how Eliezer Yudkowsky is treated by his fans and want some of that sweet acclamation for themselves, but without legitimately earning it. They want to make a show of encouraging criticism, but only in a kayfabe, neutered form that lets them answer smoothly in a way that only reinforces their status. And Oliver Habryka and the other mods apparently approve of this behavior, or at least are unwilling to take any effective steps to curb it, which I find very disappointing.
What if The Gift of Pain is true of mental suffering?
Meditators often describe “dissolving pain into vibrations”. When this happens, you still get the sensory inputs that (normally) cause pain. You still take action to prevent the damage they cause to your body. You just don’t create the conscious experience of pain-suffering.
You seem to be missing the point here. Presumably we have the capacity to suffer because it facilitated our survival somehow. How are you so sure you don’t need to hear the message suffering was sending you?
I’m normally in favor of high decoupling, but this thought experiment seems to take it well beyond the point of absurdity. If I somehow found myself in control of the fate of 10^100 shrimp, the first thing I’d want to do is figure out where I am and what’s going on, since I’m clearly no longer in the universe I’m familiar with.
The world is a big place, so there are probably a few people out there who truly abhor all criticism that hurts people’s feelings regardless of who it’s directed at. But in my experience, the vast majority of the time, whether someone perceives a criticism as hurtful or out of bounds depends strongly on whether the perceiver likes, agrees with, or is affiliated with the target of the criticism. To take the atheism example, it seems to me there wasn’t an overall shift away from criticism of deeply-held beliefs in general, but rather a shift in the larger battle lines of the culture war.
On unintentional machiavellianism, see here.
Right, but the problem is that the people who believe in astrology (or who work for an astrology company, or whose friends are into astrology, etc.) will say “no, it’s wrong to criticize astrology,” the people who don’t have a stake in astrology will say “yes, it’s okay to criticize astrology,” and there’s no neutral arbiter to adjudicate the disagreement. You haven’t gotten anywhere by going up a meta-level because the stakes are still the same.
As for intent, I tend to favor treating intentional and unintentional machiavellianism the same, as doing otherwise just amounts to punishing people for having an accurate self-model, which seems like a bad way to promote truthseeking.
If someone is “optimizing in an objectionable direction” doesn’t that just mean they’re your enemy? And if so, aren’t the valid responses to fight, negotiate, or give up? I don’t understand what you’re concretely expecting to happen in this situation. It seems like you’re expecting the bad guys to surrender just because you explained that they’re bad, but I don’t see what would motivate them to do that.
But most of LLMs’ knowledge comes from the public Web, so clearly there is still a substantial amount of useful content on it, and maybe if search engines had remained good enough at filtering spam, fewer people would have fled to Discord.
What if driving the user into psychosis makes it easier to predict the things the user wants to hear?
The cooled indoor air also makes its way back outside before long, though, so the cooling should mostly cancel out over the course of a day, leaving just the waste heat from the AC’s power consumption.
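A rough back-of-the-envelope sketch of that energy balance, with made-up numbers for the AC’s efficiency and daily consumption (nothing here is measured):

```python
# Toy energy balance for an air conditioner over one day.
# Assumed numbers: COP of 3 and 10 kWh/day of electricity use.
cop = 3.0
electricity_kwh = 10.0

heat_removed_from_indoors = cop * electricity_kwh                   # Q_c = 30 kWh
heat_dumped_outdoors = heat_removed_from_indoors + electricity_kwh  # Q_c + W = 40 kWh

# Once the cooled indoor air leaks back outside and mixes, the 30 kWh
# "removed" indoors returns to the environment, so the net heat added
# to the neighborhood is just the electrical input W.
net_outdoor_heating = heat_dumped_outdoors - heat_removed_from_indoors
print(net_outdoor_heating)  # 10.0 kWh, i.e. just the AC's power consumption
```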
Women are using AI models to create “better” versions of their faces and then asking plastic surgeons to make them look like that. So even if the surgery comes out exactly as intended, the effect is to make people look more like AI slop in real life. But apparently AI slop looks the way it does because it’s what the modal person tends to upvote, so a lot of people won’t see any problem with this.
This usage might originate with Paul Graham around 2002.
I agree that it’s important to optimize our vibes. They aren’t just noise to be ignored. However, I don’t think they exist on a simple spectrum from nice/considerate/coddling to mean/callous/stringent. Different vibes are appropriate to different contexts. They don’t only affect people’s energy but also signal what we value. Ideally, they would zap energy from people who oppose our values while providing more energy to those who share our values.
Case in point: I was annoyed by how long and rambly your comment was, and by how much extra effort it took to distill a clear thesis from it. I’m glad you actually did have a clear thesis, but writing like that probably differentially energizes people who don’t care.
You could handle both old and new scrapes by moving the content to a different URL, changing the original URL to a link to the new URL, and protecting only the new URL from scraping.
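A minimal sketch of that scheme, assuming a Flask app with hypothetical paths (`/post/...` for the old, already-scraped URL and `/private/post/...` for the new one; the user-agent check is just a stand-in for whatever real protection you’d use):

```python
from flask import Flask, abort, request

app = Flask(__name__)

CONTENT = {"123": "The full text of the post lives here."}

@app.route("/post/<post_id>")
def old_url(post_id):
    # The URL already present in old scrapes: it now serves only a
    # pointer to the new location, so there is nothing left to re-scrape.
    return f'<a href="/private/post/{post_id}">This post has moved.</a>'

@app.route("/private/post/<post_id>")
def new_url(post_id):
    # The new URL holding the actual content; apply anti-scraping
    # measures (auth, rate limits, etc.) here only.
    if "bot" in request.headers.get("User-Agent", "").lower():
        abort(403)
    text = CONTENT.get(post_id)
    if text is None:
        abort(404)
    return text

@app.route("/robots.txt")
def robots():
    # Ask well-behaved crawlers to stay away from the new location only.
    return "User-agent: *\nDisallow: /private/\n", 200, {"Content-Type": "text/plain"}
```

Old links in existing scrapes keep working for humans at the cost of one extra click, while new scrapers only ever see the stub.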
As I see it, a fatal problem with CEV is that even one persistent disagreement between humans leaves the AI unable to proceed, and I think such disagreements are overwhelmingly likely to occur. Adding other sentient beings to the mix only makes this problem even more intractable.

EDIT: I should clarify that I’m thinking of cases where no compromise is possible, e.g. a vegan vs. a sadist who derives their only joy from torturing sentient animals. You might say sadists don’t count, but there’s no clear place to draw the line of how selfish someone has to be to have their values disregarded.

EDIT 2: Nevermind, just read this comment instead.
Test?