The key word in the above answer being “optimal”. It seemed to me like the post was saying “here’s one thing you can pay attention to in order to optimize your learning,” and you were replying “But I don’t pay attention to that and can still learn,” which is essentially arguing against a point that the original post never made.
I think there’s an inferential distance step I’m missing here, because I’m actually a bit at a loss as to how to relate my post to empiricism.
See also You Don’t Get To Know What You’re Fighting For, which makes this sort of situation more explicit.
I don’t think “reasonable” is the correct word here. You keep assuming away the possibility of conflict. It’s easy to find a peaceful answer by simulating other people using empathy, if there’s nothing anyone cares about more than not rocking the boat. But what about the least convenient possible world where one party has Something to Protect which the other party doesn’t think is “reasonable”?
Yes, if someone has values that are in fact incompatible with the culture of the organization, they shouldn’t be joining that organization. I thought that was clear in my previous statements, but it may in fact have not been. If every damn time their own values are at odds with what is best for the organization given its values, that’s an incompatible difference. They should either find a different organization, or try the archipelago model. There are such things as irreconcilable value differences.
I don’t think the OP is compatible with the shared values and culture established in Sequences-era Overcoming Bias and Less Wrong.
I agree. I think when that culture was established, the community was missing important concepts about motivated reasoning and truth seeking and chose values that were in fact not optimized for the ultimate goal of creating a community that could solve important problems.
I think it is in fact good to experiment with the norms you’re talking about from the original site, but I think many of those norms originally caused the site to decline and people to go elsewhere. Given my current mental models, I predict that a site using those norms will make less intellectual progress than a similar site using my norms, although I expect you to have the opposite intuition. As I stated in the introduction, the goal of this post was simply to make sure that those mental models were in discourse.
Re your dialogue: The main thing that I got from it was that you think a lot of the arguments in the OP are motivated reasoning and will lead to bad incentives. I also got that this is a subject you care a lot about.
If someone attempts to give equal weight to their own needs, the needs of their interlocutor, and the needs of the forum as a whole—how do we know whether they’ve got a reasonable answer? Does that just have to be left to moderator discretion, or?
Yes, basically, but if the forum were to take on this direction, the idea would be to have enough case examples/explanations from the moderators about WHY they exercised that discretion to calibrate people’s reasonable answers. See also this response to Zach, which goes into more detail about the systems in place to calibrate people’s reasonable answers.
I think that this and your original comment seem to kind of...be talking to a different post or something?
Like it didn’t seem like the original post was at all about being able to get things done, but more about optimizing learning.
I think I was viewing “cultural wisdom” as basically its own black-box model, and in practice I think this is basically how I treat it.
Nitpick: Humans are definitely creating models at 12, and able to understand that what they’re creating are models.
I’m having a hard time figuring out how the examples of evolution and markets map on to the agent above. They’re both processes that seem to take advantage of the fact that there are agents inside them trying to optimize goals, and are the results of these agents and processes coming into conflict with each other and reality. While I can imagine the unselfaware agent simulating these instances, I don’t see a way to actually view it as a process made up of agents trying to achieve goals.
I really enjoyed this, I especially liked the part about the stages and where learners are likely to get stuck. I personally really related to the description of the unproductive approach to proficiency, and think I’ve probably got quite a few skills stuck at proficiency due to the unproductive moods mentioned. Knowing that I can cultivate the moods of ambition, resolve, and patience to move forward with these skills feels like it could be really useful.
> start with vague cultural wisdom that seems to work and slowly replace it with more robust things as you gain skills that enable you to do so.
I think the thing I actually do here most often is start with a bunch of incompatible models that I learned elsewhere, then try to randomly apply them and see my results. Over time I notice that certain parts work and don’t, and that certain models tend to work in certain situations. Eventually, I examine my actual beliefs on the situation and find something like “Oh, I’ve actually developed my own theory of this that ties together the best parts of all of these models and my own observations.” Sometimes I help this along explicitly by introspecting on the switching rules/similarities and differences between models, etc.
This feels related to the thing that happens with my moral intuitions, except that there are internal models that didn’t seem to come from outside or my own experiences at all, basic things I like and dislike, and so sometimes all these models converge and I still have a separate thing that’s like NOPE, still not there yet.
I think philosophical bullet biting is usually wrong. It can be useful to make a theory that you KNOW is wrong, and bite a bullet in order to make progress on a philosophical problem. However, I think it can be quite damaging to accept a practical theory of ethics that feels practical and consistent to you, but breaks some of your major moral intuitions. In this case I think it’s better to go “I don’t know how to come up with a consistent theory for this part of my actions, but I’ll follow my gut instead.”
Note that this is the opposite of becoming a robust agent. However, the alternative is CREATING a robust agent that is not in fact aligned with its creator. I’ve seen people who adopted a moral view for consistency, and now make choices that they NEVER would have endorsed before they chose to bite bullets for consistency.
I think this is one of my major disagreements with Raemon’s view of becoming a robust agent.
I don’t see how anyone is supposed to compute that.
> I don’t see how anyone is supposed to compute that.
If your primary metaphor for thought is simple computations or mathematical functions, I can see how this would be very confusing, but I don’t think that’s actually the native architecture of our brains. Instead our brain is noticing patterns, creating reusable heuristics, and simulating other people using empathy.
When you look at the question using that native architecture, it becomes relatively simple to find a reasonable answer. This is the same way that we regularly find solutions to complex negotiations between multiple parties, or plan complex situations with multiple constraints, even though many of those tasks are naively uncomputable. The shared values and culture serve to make sure those heuristics are calibrated similarly between people.
Yes, I get the argument, but am unsure of how Romeo sees it relating to this post.
Do you find that you don’t have different states or moods?
Yes, that seems like a decent summary.
I’m holding the frame you wrote on your shortform feed re defensiveness for a bit to see how I feel about it.
> I’m not a fan of that definition. It’s equating “feelings of safety” with “actual safety”
I agree with this, but it’s quite a mouthful to deal with. And I think “feelings of safety” are actually more important for truthseeking and creating a product—they’re the things that produce defensiveness, motivated reasoning, etc.
I think mr-hire thinks the important success condition is that people feel safe and that it’s important to design the space towards this goal, with something of a collective responsibility for the feelings of safety of each individual.
This seems rightish, but off in really important ways that I can’t articulate. It’s putting the emphasis on the wrong things, and “collective responsibility” is not an idea I like at all. I think I’d put my stance as something like “feeling unsafe is a major driver of what people say and do, and good cultures provide space to process and deal with those feelings of unsafety”
This definition can also put a lot of the power in the hands of those who are having a reaction. If we all agree that our conversation must be safe, and that any individual can declare it unsafe because they are having a reaction, this gives a lot of power to individuals to force attention on the question of safety (and I fear too asymmetrically, with others being blamed for causing the feelings of unsafety).
Note that this issue is explicitly addressed in the original dialogue. If someone’s feelings are hurting the discourse, they need to take responsibility for that just as much as I need to take responsibility for hurting their feelings. No one is agreeing that all conversations must be safe for all people, but simply that taking into account when people feel unsafe is important.
I think it’s best defined by its antonym. Unsafety, in this context, would mean anything that triggers a defensive or reactive reaction. Just like how bodily unsafety triggers fear, aggression, etc., there are psychological equivalents that trigger the same reaction.
Safety is when a particular circumstance doesn’t trigger that reaction, OR alternatively there could be a meta-safety (AKA, having that reaction doesn’t itself trigger a further reaction, because it’s ok).
I think your bolded definitions of safe would actually be served by changing to the word allowed, which for many people correlates quite closely with their feeling of safety.
So it’s unclear to me which arguments you’re referring to, but I think you might be saying something like
“The reason it’s important to focus on needs is that if we don’t, it causes people to make convincing logical arguments that are actually about their needs”
However, you could also be saying “This post is a logical argument and convincing, but that doesn’t make it true.”
Or possibly “A culture that’s focused on discussion to find truth isn’t that useful, and we should be focusing more on things like empiricism.”
I’m curious what it is you’re trying to point at here.
But lately I seem to be seeing a lot of arguments of the form, “Ah, but we need to coordinate in order to create norms that make everyone feel Safe, and only then can we seek truth.” And I just … really have trouble taking this seriously as a good faith argument rather than an attempt to collude to protect everyone’s feelings?
I want to address something that I think is quite important in the context of this post, because I think you’re pattern matching the “let’s make a space where people’s needs are addressed,” to the standard social justice safe space, but there are actually 3 types of safe spaces, and the one you’re imagining is not related to the ones this post is talking about.
The social justice kind, where nobody is allowed to bring up arguments that make you feel unsafe, is the one you’re talking about. “We need to make everyone feel safe and can’t seek truth until we do that” is describing an environment where truth seeking is basically impossible. I think private spaces like that are important in a rationalist environment, because some people are fragile and need to heal before they can participate in truth seeking, but are almost never right for an organization that has the goal of seeking truth.
Then there’s the kind that this post is talking about. In this type of environment, it’s safe to say “This conversation is making me feel unsafe, so I need to leave”. It’s also safe to say “It feels like your need for safety is getting in the way of truthseeking” as well as for other people to push back on that if they think that this person’s need for safety is so great in this moment that we need to accommodate them for a bit and return to this topic later. I think the majority of public truth-seeking spaces would be served by adopting this type of safety, in lieu of something like Crocker’s rules.
Then there’s the third type of safe space. In this type of safe space, you can say “This topic is making me feel unsafe” and the expected response is “Awesome, then we’re going to keep throwing you in as many situations like this as possible, poke that emotional wound, and help you work through it so you can level up as an individual and we can level up as an organization.” In this case, the safety comes from the strict vetting procedures and strong culture that let you know that the people poking you are sincere and skilled, and the people being poked have the emotional strength to deal with it. I think that a good majority of PRIVATE truth seeking spaces should strive to be this third type of safe space.
One of the mistakes I made in this post was conflating the second and third types of safe spaces, so for instance I posited a public space that also had radical transparency, which is really only a tool you should use in a culture with strong vetting. However, I definitely was not suggesting the first type of safe space, but I get the impression that that’s what you keep imagining.