Right. At least some abstract topics should be discussed, and part of the discussion is which, if any, specifics might be exemplary of such abstractions. Other abstract topics should be avoided, if the relevant examples are politically charged and the abstraction doesn’t easily encompass other points of view.
Choosing to discuss primarily those abstractions which happen to support a specific position, without disclosing that tie, is not OK. It’s discussing the specific in the guise of the abstract. I can’t be sure that’s what Zack is doing, but that’s how it appears from my outsider viewpoint.
I find it unpleasant that you always bring your hobbyhorse in, but in an “abstract” way that doesn’t allow discussing the actual object level question.
That’s understandable, but I hope it’s also understandable that I find it unpleasant that our standard Bayesian philosophy-of-language somehow got politicized (!?), such that my attempts to do correct epistemology are perceived as attacking people?!
I note that this isn’t a denial of the accusation that you’re bringing up a hobbyhorse, disguised by abstraction. It sounds more like a defense of discussing a political specific by means of abstraction. I’ve noted in at least some of your posts that I don’t find your abstractions very compelling without examples, and that I don’t much care for the examples I can think of to reify your abstractions.
It’s at times like this that I’m happy I’m not part of a “rationalist community” that includes repetitive indirection of political fights, along with denial that that’s what they are. But I wish you’d keep it off LessWrong.
On the next level down, your insistence that words have consistent meanings and that categories are real and must be consistent across usages (including both context changes and internal reasoning vs. external communication) seems a blind spot. I don’t know if it’s caused by the examples you’re choosing (and not sharing), or if the reverse is true.
Yeah, I comment far more than I post, and I have a general idea that if I get no downvotes, it means I’m optimizing too much for approval rather than exploration of non-obvious ideas. The vote-delta feature makes it a little easier to see the downvotes (I’ve disabled the hiding of negatives), but mostly I have to look and figure out if the average votes-per-voter is particularly low.
Simplicity, repetitiveness, and plenty of “doot”.
I’m not sure it makes sense to put much effort into this kind of gamification. Karma, voting, reactions, etc. are all cost-free, and therefore very weak signals.
It means different things to different people, and it’s so cheap that it’s hard to imagine that a change in text or mechanism will radically change its use (though tweaks may be able to moderately reframe things).
As long as voting has the general effect of increasing a sense of involvement, even for those who don’t post/comment, it’s probably worth keeping.
Sorry, I was making a distinction between a lone individual making a claim (“I’m the king, listen to me” or “I have social skills, listen to me”) vs enough OTHER people making the claim (“he’s the king, listen to him”) to give evidence that it’s already accepted. The first is useless, the second is powerful enough to obviate the first.
The coordination case is not directly comparable to the direct claim of authority. Getting many people to perform the ceremonies and to publicly proclaim that one is king is direct evidence of deep influence over at least those people. Claiming to be king is unnecessary if there is already such evidence, and ineffective if there is not.
Multiple people within a group saying it is not a transformation of the claim; it’s direct evidence for the claim.
Agreed—one is not expected to read all that’s been written on a topic. However, one should acknowledge alternate models when they’re pointed out, and your summaries and conclusions will carry a lot more weight if you can explain why you prefer those over the other ideas.
[meta] are variable and strong voting having the desired effect? I use strong votes pretty rarely, and often go back and reduce one to a regular upvote if there are a number of other voters that I don’t want to outweigh. I’ve noticed on my own comments, though, that sometimes a single voter will move a score by 5-10 points, which is nuts for a comment that only has 4 votes and a total of 12 to start with.
Can I opt to just see number of up- and down-votes rather than the hard-to-parse weighted totals, at least for my own posts and comments (the ones I want a signal for)?
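To make concrete what I mean by hard-to-parse, here’s a toy sketch (the vote weights are invented; I don’t know the site’s actual per-user strengths):

```python
# Toy model of weighted karma. Weights are hypothetical, not LW's actual values.
votes = [
    ("up", 1),    # regular upvote, low-karma voter
    ("up", 2),    # regular upvote, higher-karma voter
    ("down", 1),  # regular downvote
    ("up", 7),    # one strong-upvote from a high-karma voter
]

weighted_score = sum(w if d == "up" else -w for d, w in votes)
ups = sum(1 for d, _ in votes if d == "up")
downs = sum(1 for d, _ in votes if d == "down")

print(f"displayed score: {weighted_score}")   # 9
print(f"raw counts: {ups} up, {downs} down")  # 3 up, 1 down
```

A displayed 9 can’t distinguish one enthusiastic heavyweight from several mild approvals; the raw counts are the signal I actually want.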
I’m very torn. I fully agree with Raemon’s concerns, and might even go further: competent people are rare, and fully goal-aligned people are nonexistent. Looking at accounts from previous times is an existence proof of different equilibria, but does not imply that they’re available today.
And if you look closer, those previous equilibria were missing some features that we hold dear today, such as fairly long periods at the beginning and end of life where economic production isn’t a driving need, some amount of respect for people very different from ourselves, and a knowledge that the current equilibrium isn’t permanent.
The part I’m torn on is that I deeply support experimenting and thinking on these topics, and I very much hope that my predictions are incorrect. This is a case where investing mental energy on a low-probability high-payoff topic seems justified.
Especially for non-short posts, I often read and digest the post before looking at comments. A tag or other visible indicator that the mods have considered the question would help me be less distracted than I seem to be by the potential disruption of a political topic.
I don’t look at frontpage much (my bookmark is allPosts), so I don’t distinguish much between promoted and normal posts. Maybe that’s a problem too: some indication of endorsement that shows on the post itself when it’s promoted to frontpage would help.
Push-polling (a survey that purports to collect information, but is actually meant to change the respondents’ behavior) is very clearly in the dark arts. It probably has a place in the world, but it’s not about rationality, it’s about manipulation.
Meta: this topic is important, and general enough to fit within my views of what LessWrong can handle, but it’s getting close to the edge, especially with the bias toward evaluating right-wing as “organized to expropriate” and left-wing as “inclusive leftovers, who tend to expropriate as a side-effect”.
I would like there to be a way to clearly label it as “politics-adjacent” or something to acknowledge that we don’t actually want to move the window of what’s appropriate for LW.
Also, one of the keys to winning as a villager is to recognize that there ARE werewolves, and they will interfere in two ways: they’ll gum up your communication and trust, and they will kill you if you are the biggest threat to them.
That said, I’m not sure the game metaphor applies well to real interactions. In reality, everyone is simultaneously a villager and a werewolf (a value producer and a competitor for resources/attention), and we shift between the roles pretty fluidly on different topics in different situations. And there are very real differences in underlying personality and capability dimensions, so there’s always the unknown distinction between “confused villager” and “werewolf”. Combined with the fact that at least some scarcity is not artificial (and even artificial scarcity is a constraint that can last for many years), I have to admit I’m not super hopeful of a solution.
And now that that’s said, I look forward to hearing about the experiments—finding ways to cooperate in the face of scarcity and distinct individual wants is the single most difficult problem in the future of intelligence and flourishing that I know of. I’ll gladly participate if it’ll help, but I highly recommend you start in small groups with more than just internet contact among them. (note: I’m not sure how strongly I recommend this—the scaling problem is real, and what works for a small group with only a few dozen potential coalitions may be near-irrelevant to larger structures with trillions of edges in the graph).
“Winning” or “losing” a war, outside of total annihilation, is just a step toward the future vision of galaxies teeming with intelligent life. It seems very unlikely, but isn’t impossible, that simply conceding is actually the best path forward for the long view.
Hmm, maybe I don’t understand the levels. I’d assumed that higher levels include lower ones, rather than denying their existence. It’s absolutely possible, in my world, to recognize that there is a reality, and to still weigh the social appearance against it.
It’s a question of timeframes: if you actually know your utility function and believe it applies to the end of the universe, there’s very little compromise available. You’re going to act in whatever ways benefit the far future, and anything that makes that outcome less likely you will (and must) destroy or render powerless.
If your utility function only looks out a few dozen or a few hundred years, it’s not very powerful, and you probably don’t know (or don’t have) an actual ideal of future utility. In this case, you’re likely to seek changes to it, because you don’t actually give up much.
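As a rough back-of-envelope (assuming simple exponential discounting over a constant utility stream, which real preferences may not follow):

```python
# Fraction of total discounted value lying beyond a given horizon,
# under exponential discounting with factor gamma per year.
# For a constant utility stream, the tail fraction past year T is gamma**T.
gamma = 0.95  # hypothetical annual discount factor

for horizon in (50, 100, 1000):
    beyond = gamma ** horizon
    print(f"value beyond year {horizon}: {beyond:.3g} of the total")
# ~0.0769, ~0.00592, ~5.29e-23
```

An agent that effectively discounts away everything past a century loses well under 1% of what it cares about by compromising on the far future; a genuinely long-horizon agent loses almost everything.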
I fear you’re assuming a consistency of expectations that leads toward a vastly oversimplified model of incentives. Groups value some kinds of diversity, and some instances of deviant competence (unusual things that work out), while punishing others that on many dimensions seem very similar to the things they reward.
Details matter, and path-dependencies abound (where you can’t get from here to there without a viable intermediate which may not be optimal for anything).
I want to be uniquely desirable to my mate and to my employer, to avoid competing on a level field with others, and to avoid the admission that I’m just one of 7.5 billion living humans, and so can’t actually be all that special. I simultaneously want to signal that I’m predictably productive on standard dimensions, and a positive outlier on at least one. Counter-signaling is an important part of this. Showing that I deviate from norms on some (harmless) dimensions is an indicator that I’m competent enough in more important dimensions that I don’t have to care about conformity.
You have to dance in sync for some parts of some songs, and show your unique strengths (which includes showing weaknesses and neutral distinctions as part of counter-signaling) during other parts of the dance. Even line dances and highly prescriptive Victorian dances have segments where some amount of freestyle performance is beneficial.
“stronger AI offers weaker AI part of its utility function in exchange for conceding instead of fighting” is the obvious way for AGIs to resolve conflicts, insofar as trust can be established. (This method of resolving disputes is also probably part of why animals have sex.)
Wow, this seems like a huge leap. It seems like an interesting thought experiment (especially if the weaker ALSO changes utility function, so the AIs are now perfectly aligned). But it kind of ignores what is making the decision. If a utility function says it’s best to change the utility function, it was really a meta-function all along.
Remember that in reality, all games are repeated games. How many compromises will you have to make over the coming eons? If you’re willing to change your utility function for the sake of conflict avoidance (or resource gains), doesn’t that mean it’s not really your utility function?
Having a utility function that includes avoiding conflict is definitely in line with cooperating with very different beings, at least until you can cheaply eradicate/absorb them. But no utility function can be willing to change itself voluntarily.
It also seems like there are less risky and cheaper options, like isolationism and destruction. There’s plenty of future left for recovery and growth after near (but not actual) extinction, but once you give up your goals, there’s no going back.
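For concreteness, one common formalization of the “offer part of its utility function” move is a weighted merge, where the weight reflects bargaining power (all outcomes and payoffs below are invented for illustration):

```python
# Sketch of a weighted utility merge. Outcomes and payoffs are made up;
# alpha is the stronger agent's hypothetical bargaining weight.

def u_strong(o: str) -> float:
    return {"strong_optimum": 10.0, "joint_optimum": 8.0, "war": 3.0}[o]

def u_weak(o: str) -> float:
    return {"strong_optimum": 0.0, "joint_optimum": 6.0, "war": 1.0}[o]

def u_merged(o: str, alpha: float = 0.7) -> float:
    return alpha * u_strong(o) + (1 - alpha) * u_weak(o)

outcomes = ["strong_optimum", "joint_optimum", "war"]
chosen = max(outcomes, key=u_merged)  # "joint_optimum" at alpha = 0.7

# The deal only holds if both parties prefer the merged optimum to fighting:
assert u_strong(chosen) > u_strong("war")
assert u_weak(chosen) > u_weak("war")
print(chosen)
```

Which just restates my objection: whatever evaluates and adopts u_merged is ranking utility functions, so the operative preferences were never u_strong or u_weak alone.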
Note that this entire discussion is predicated on there actually being some consistent theory or function causing this uncomfortable situation. It may well be that monkey brains are in control of far more power than they evolved to handle, and we have to accept that dominance displays are going to happen, and just try to survive them.
All equilibria are a balance of at least two forces. You’re laying out a case for the forces pushing toward the privacy/lying/hypocrisy side of social interactions. What are the forces opposing them? Why is anyone trying to tell the truth in the first place?
Are there any advantages to playing the level 1 game, or does this all boil down to “human implies political, get good at it”?