I thought you were threatening extortion. As it is, given that people are being challenged to uphold morality, this response is still an offer to throw that away in exchange for money, under the claim that it’s moral because of some distant effect. I’d encourage you to follow Jai’s example and simply delete your launch codes.
This seems extremely unprincipled of you :/
Agreed @ the differences not being that great. I’ve seen this model around for a while, and while it does describe a distinction, I feel that distinction is not clean in the territory.
I think a lot more people in the world actually live in a mindset where concrete physical thinking is what’s real than it might seem! The problem as I see it is, people’s causal calibration level varies, and people’s impression of their own ability to have their own beliefs about a topic without it embarrassing them varies. The “social reality” case is what you get when someone focuses most or all of their attention on interacting with people and doesn’t have anything hard in their life, so they simply don’t need to be calibrated about physics and must rely on others’ skill in such topics.
But I don’t think nearly any neuroplastic human is going to be so unfamiliar with [edit: hit submit while trying to put my cursor back! continuing writing...]
… unfamiliar with causal reality that they can’t comprehend the necessity of basic tasks. They might feel comfortable and safe and therefore simply not think about the details of the physics that implements their lives, but it’s not a case of there being a social reality that’s a separate layer of existence. It’s more like the social behavior is what you get when people don’t have the emotional safety and spare time and thinking to explore learning about the physics of their lives.
does that seem accurate to y’all? what do you think?
I agree with this in some ways! I think the rationality community as it is isn’t what the world needs most, since putting effort into being friendly and caring for each other in ways that try to increase people’s ability to discuss without social risk is IMO the core thing that’s needed for humans to become more rational right now.
IMO, the techniques are relatively easy to share once you have the trust to talk about them, and merely require a lot of practice; but convincing large numbers of people that it’s safe to think things through in public without weirding out their friends seems to me likely to require actually making it safe to think things through in public without weirding out their friends. I think that scaling a technical and cultural solution for creating emotional safety to discuss what’s true, one that results in many people putting regular effort into communicating friendliness toward strangers when disagreeing, would do a lot more for humanity’s rationality than scaling discussion of specific techniques.
The problem as I see it right now is that this only works if it is seriously massively scaled. I feel like I see the reason CFAR got excited about circling now—seems like you probably need emotional safety to discuss usefully. But I think circling was an interesting thing to learn from, not a general solution. I think we need to design an internet that creates emotional safety for most of its users.
Thoughts on this balance, other folks?
My own thoughts on the topic of ai, as related to this:
I currently expect that the first strongly general AI will be trained very haphazardly, using lots of human knowledge akin to parenting, and will still have very significant “idiot-savant” behaviors. I expect we’ll need an approach similar to deepmind’s starcraft AI for the first version: that is, reaching past what current tools can do individually or automatically, and hacking them together in a complex training system built for the specific purpose. However, I think at this point we’re getting pretty close in terms of the capabilities of individual components. If a transformer network were the only module in a system, but the training setup produced training data that required the transformer to become a general agent, I currently think it would be capable of the sort of abstracted, variable-based consequential planning that MIRI folks talk about being dangerous.
I strongly agree with this point. This is the core reason I have mostly stopped using less wrong. I just made a post, and being able to set my own moderation standards is kind of cool. That might make less wrong worth using as a blog, actually.
eliezer’s problem is what you have if your friend group is getting diluted. this problem is what you have if you’re trying to dilute your friend group as much as you can.
Hey cool. this is the sort of reward I need to enjoy a site enough to use it.
I’m pretty uncomfortable with the tone of this article. The title is a command, the “epistemic status” label is simply “confident”, and yet the comments contain many disagreements I feel are reasonable. Even though its main point seems reasonable as far as I can tell, I strong-downvoted for what I perceive to be bad discourse.
Any news on this? (hey yall front page comment readers)
I’m going to steal this; I’ll probably try using a continuous relaxation of it and breaking it into causal parts and such
my metric of success: “get rationalists off of facebook”. to do this you need to replace facebook. discord replaces part of it with a much healthier thing, but lesswrong like stuff is needed for the other part.
it’s literally the only thing I use. I basically never click on the post list because they’re all collapsed and on a different page. give me a feeeeeeed
because otherwise people don’t read less wrong, since the only things that happen there are people posting overthought, crystallized ideas.
is it social if a human wants another human to be smiling because perception of smiles is good?
I wouldn’t say so, no.
good point about lots of level 1 things being distorted or obscured by level 3. I think the model needs to be restructured to not give a privileged intrinsicness to level 1, but rather initialize moment-to-moment preferences with one thing, then update that based on pressures from the other things
so I’m very interested in anything you feel you can say about how this doesn’t work to describe your brain.
with respect to economics: I’m thinking about this mostly in terms of partially-model-based reinforcement learning/build-a-brain, and economics arises when you have enough of those in the same environment. the thing you’re asking about is more on the build-a-brain end and is pretty open for discussion; the brain probably doesn’t actually have a single scalar reward, but rather a thing that can dispatch rewards with different masks or something
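a minimal sketch of what I mean by “dispatch rewards with different masks” (all names here are made up for illustration, not from any real system):

```python
# Hypothetical sketch: instead of a single scalar reward, a dispatcher
# emits a per-head reward, and a mask selects which value heads a given
# event touches (e.g. separate "social" vs. "physical" drives).

N_HEADS = 4   # number of value heads (an assumption for the sketch)
ALPHA = 0.1   # learning rate

def dispatch_reward(magnitude, mask):
    """Apply a scalar reward only to the heads selected by `mask`."""
    return [magnitude * m for m in mask]

def update(values, reward):
    """Simple exponential-moving-average value update per head."""
    return [v + ALPHA * (r - v) for v, r in zip(values, reward)]

values = [0.0] * N_HEADS
# a "social" event that rewards only heads 0 and 2:
values = update(values, dispatch_reward(1.0, [1, 0, 1, 0]))
```

the point being that different event types can write to different subsets of the value state, rather than everything collapsing into one number.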
this would have to take the form of something like, first make the agent as a slightly-stateful pattern-response bot, maybe with a global “emotion” state thing that sets which pattern-response networks to use. then try to predict the world in parts, unsupervised. then have preferences, which can be about other agents’ inferred mental states. then pull those preferences back through time, reinforcement learned. then add the retribution and deservingness things on top. power would be inferred from representations of other agents, something like trying to predict the other agents’ unobserved attributes.
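the staged construction above could be skeletonized like this (all class and method names are hypothetical; only the first stage is concrete, the rest are stubs to show where the later stages slot in):

```python
# Skeleton of the staged construction sketched above; structure only,
# not a working agent.

class PatternResponseBot:
    """Stage 1: a slightly-stateful pattern-response bot, with a global
    'emotion' state that selects which pattern-response network runs."""

    def __init__(self, networks, emotion="neutral"):
        self.networks = networks   # emotion name -> response function
        self.emotion = emotion

    def act(self, observation):
        return self.networks[self.emotion](observation)


class WorldModel:
    """Stage 2: predict the world in parts, unsupervised (stub)."""

    def predict(self, observation):
        raise NotImplementedError


class PreferenceModule:
    """Stage 3: preferences, possibly over other agents' inferred mental
    states. Stage 4 would pull these back through time with RL; things
    like retribution/deservingness would be layered on top."""

    def score(self, inferred_state):
        raise NotImplementedError


# usage of the one concrete stage:
bot = PatternResponseBot({
    "neutral": lambda obs: "observe: " + obs,
    "alarmed": lambda obs: "flee: " + obs,
})
bot.emotion = "alarmed"
```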
also this doesn’t put level 4 as this super high level thing, it’s just a natural result of running the world prediction for a while.
the better version of this model probably takes the form of a list of the most important built-in input-action mappings.