“Nono, you have been misled. I *do* have a hero license.”
Emrik
Reading the prelude, I was already thinking about my experiments with deliberately moving my eyes to attend to different things while playing Osu!. Much to my delight, I discovered a few paragraphs later that YOU ACTUALLY PLAY OSU!
Anyway, here’s my profile. Feel free to play with me!
Profile: https://osu.ppy.sh/users/18771571
I’m loving this Sequence so far. I’d really like to see a list of all the concrete norm innovations you can think of that you’d like to see tried in the community. I realise some norms aren’t very concrete or easy to put down on paper, but I’d like an as-comprehensive-as-possible list anyway.
One for the list: Impact certificates.
The Paradox of Expert Opinion
This is a good point in the sense that communication between the researchers could in theory make all of them converge to the same beliefs, but it assumes that they all communicate absolutely every belief to everyone else faster than any of them can form new beliefs from empirical evidence.
But either way it’s not a crux to the main ideas in the post. My point with assuming they’re perfectly rational is to show that there are systemic biases independent of the personal biases human researchers usually have.
(Edit: It was not I who downvoted your comment.)
I’m not sold yet on why any of the examples are bad?
I know very little of string theory, so maybe that’s the one I think is most likely to be a bad example. I assume string theorists are selected for belief in the field’s premises, whether that be “this math is true about our world” or “this math shows us something meaningfwl”. Physicists who buy into either of those statements are more likely to study string theory than those who don’t buy them. And this means that a survey of string theorists will be biased in favour of belief in those premises.
I’m not talking inside view. It doesn’t matter to the argument in the post whether it is unreasonable to disagree with string theory premises. But it does matter whether a survey of string theorists will be biased or not. If not, then that’s a bad example.
A central question related to this post is “which reference class should you use to answer your question?” A key point is that it depends on how much selection pressure there is on your reference class with respect to your query.
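The selection-pressure point can be made concrete with a toy simulation (my own illustrative model, not from the post; all names and parameters are made up): if people only enter a field when their prior credence in its premises is high enough, a survey of insiders overstates the wider population's credence.

```python
import random

random.seed(0)

def surveyed_mean_belief(population_size=100_000, entry_threshold=0.6):
    """Toy model of selection pressure on a reference class.

    Each scientist's credence in the field's premises is drawn
    uniformly from [0, 1]. Only those above `entry_threshold` enter
    the field, so surveying insiders samples a biased slice of the
    full population.
    """
    population = [random.random() for _ in range(population_size)]
    insiders = [p for p in population if p > entry_threshold]
    pop_mean = sum(population) / len(population)
    insider_mean = sum(insiders) / len(insiders)
    return pop_mean, insider_mean

pop_mean, insider_mean = surveyed_mean_belief()
print(f"population mean credence: {pop_mean:.2f}")  # ~0.50
print(f"insider mean credence:    {insider_mean:.2f}")  # ~0.80
```

The stronger the selection pressure (higher `entry_threshold`), the larger the gap between the surveyed mean and the population mean, independently of whether any individual insider is being unreasonable.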
Right, good point. Edited to point out that same priors (or the same complexity measure for assigning priors) is indeed a prerequisite. Thanks!
The underappreciated value of original thinking below the frontier
This was surprisingly enlightening. I’ve previously had mostly a negative attitude towards professionalism and its inflexibility, and I’ve been scared of it excessively infiltrating EA. But the idea of professionalism as “presenting consistent APIs” is really compelling.
“I can move my mind so it is as though I’ve never seen a water bottle before”
I liken this to one of my favourite concepts, shoshin—“a beginner’s mind”. Entering a state of shoshin requires perceptual dexterity.
One of the problems it tries to overcome, and which you describe in different words, is the Einstellung effect—when your perception of a problem is stuck in some way. And that’s one of the reasons perceptual dexterity is so important in original research (and especially math & philosophy).
Two Prosocial Rejection Norms
OK, granted. It was a silly example. Replace with “Hey, wanna hang out?” or some other invitation.
Consider what norms are better on the margin. I can’t change what other people decide to feel, but I can change how upfront I am about my willingness to reject, and I can change how I word my invitations to make them safer to reject.
Think of the norms I’m proposing as cheap social interventions for mental health. You can say 2-4 are misplacements of responsibility. They’re symptoms of an overactive anxiety. But I think there’s a limit to how much we should care about where responsibility lies when considering how to behave in order to bring happiness. I think taking the anxieties into account (by being upfront with willingness to reject, and helping others reject you if they wish) can improve our relationships and communities regardless.
The reason I say that I’m not worried about rejection is that I am worried about their fear of rejecting. I personally have anxieties about 3 and 4, which makes proclaiming my lack of fear (1) the logical thing to do.
But you make a good point: Some people are likely to misunderstand what I try to communicate, and end up concluding that I actually fear rejection.
Using “wanna be friends?” as an example was me trying to be cutesy and it seems to have been confusing. But I disagree that just saying “I enjoy your company” doesn’t have the same rejection fears dynamic. Statements are often veiled invitations or requests, so if you mean it to be nothing but a statement, it can sometimes be helpfwl to first clarify that it isn’t an invitation/request.
This is confusing but seems valuable to try to understand. Do you mean that if I say
“Would you like to talk for a bit? Please say no if you’d actually prefer doing something else, and I’m cool with that. I only wish to hang out if it’s mutually beneficial. :)”
...I’m somehow stating a self-deception out loud?
This comment is excellent and I would give it more upvotes if I could!
I like this point too:
Just from an information coding perspective, the length of this utterance communicates, “I consider this to be a complicated circumstance requiring extra care in order not to go badly”
I don’t wholeheartedly agree with everything you say here, but I updated the post to point out the risk of putting people “on the spot”.
Huh, this doesn’t have a bajillion upvotes? I’ve seriously been thinking about this since it was originally posted. I’ve been revealing to people that I’m an empathetic oofer, so they shouldn’t worry about causing awkwardness by what they do as long as they feel ok doing it. I’ve been more carefwl around people whom I model as intrinsic oofers because I realise that I can’t just ask people to adopt my culture unless they’re intrinsically somewhat like me.
Seeking: A program/framework to build models that can test social-epistemological hypotheses. Basically, I want to be able to run a visual simulation of little circles in a web where I code in the rules that determine how the circles interact with each other. I imagine something similar has been used in evolutionary simulations, but I don’t know where to find it.
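For ready-made tools, NetLogo and the Python library Mesa are both built for exactly this kind of visual agent-based simulation. But the core loop is simple enough to sketch with the standard library alone. Here is a minimal, illustrative model (every name and rule below is my own assumption, not an established framework): agents on a random web of links repeatedly pull their belief toward a random neighbour's.

```python
import random

random.seed(1)

def run_sim(n_agents=50, n_steps=2000, n_links=4, mix=0.2):
    """Minimal agent-based sketch of belief spread on a web.

    Each agent starts with a random belief in [0, 1] and is linked to
    `n_links` random other agents. Each step, one random agent moves a
    fraction `mix` of the way toward a random neighbour's belief.
    All rules here are illustrative placeholders for whatever
    interaction hypothesis you want to test.
    """
    beliefs = [random.random() for _ in range(n_agents)]
    links = {
        i: random.sample([j for j in range(n_agents) if j != i], n_links)
        for i in range(n_agents)
    }
    for _ in range(n_steps):
        i = random.randrange(n_agents)
        j = random.choice(links[i])
        beliefs[i] += mix * (beliefs[j] - beliefs[i])
    return beliefs

beliefs = run_sim()
spread = max(beliefs) - min(beliefs)
```

Swapping in different interaction rules (e.g. only updating toward neighbours within some belief distance) is how you'd encode different social-epistemological hypotheses; drawing the agents as circles on a canvas is then a separate rendering layer that frameworks like Mesa provide out of the box.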