Hey cool, this is the sort of reward I need to enjoy a site enough to use it.
I’m pretty uncomfortable with the tone of this article. The title is a command, the “epistemic status” label is simply “confident”, and yet the comments contain many disagreements that seem reasonable to me. Even though its main point is reasonable as far as I can see, I strong-downvoted for what I perceive to be bad discourse.
Any news on this? (hey y’all front-page comment readers)
I’m going to steal this; I’ll probably try a continuous relaxation of it and try to break it into causal parts and such.
My metric of success: “get rationalists off of Facebook”. To do this you need to replace Facebook. Discord replaces part of it with a much healthier thing, but LessWrong-like stuff is needed for the other part.
It’s literally the only thing I use. I basically never click on the post list because the posts are all collapsed and on a different page. Give me a feeeeeeed.
Because otherwise people don’t read LessWrong; the only things that happen there are people posting overthought, crystallized ideas.
Is it social if a human wants another human to be smiling because the perception of smiles is good?
I wouldn’t say so, no.
Good point about lots of level 1 things being distorted or obscured by level 3. I think the model needs to be restructured so that level 1 doesn’t have a privileged intrinsicness; rather, initialize moment-to-moment preferences with one thing, then update them based on pressures from the other things.
So I’m very interested in anything you feel you can say about where this fails to describe your brain.
With respect to economics: I’m thinking about this mostly in terms of partially-model-based reinforcement learning / build-a-brain, and economics arises when you have enough of those in the same environment. The thing you’re asking about is more on the build-a-brain end and is pretty open for discussion; the brain probably doesn’t actually have a single scalar reward, but rather something that can dispatch rewards with different masks.
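To make “rewards with different masks” concrete, here’s a minimal sketch (every name in it is hypothetical; this is just my reading of the idea): reward is a vector over learning subsystems, and each event carries a mask selecting which subsystems it’s allowed to reinforce.

```python
import numpy as np

# Minimal sketch of "masked reward dispatch"; all names are hypothetical.
# Instead of one scalar reward, reward is a vector over learning subsystems,
# and each event carries a mask selecting which subsystems it may reinforce.

N_SUBSYSTEMS = 4  # e.g. feeding, social, exploration, safety (illustrative)

def dispatch_reward(magnitude: float, mask: np.ndarray) -> np.ndarray:
    """Scalar reward magnitude, gated into a per-subsystem reward vector."""
    assert mask.shape == (N_SUBSYSTEMS,)
    return magnitude * mask

# A purely "social" event reinforces only the social subsystem:
social_mask = np.array([0.0, 1.0, 0.0, 0.0])
print(dispatch_reward(1.0, social_mask))  # -> [0. 1. 0. 0.]
```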
This would have to take the form of something like: first, make the agent a slightly-stateful pattern-response bot, maybe with a global “emotion” state that sets which pattern-response networks to use. Then try to predict the world in parts, unsupervised. Then add preferences, which can be about other agents’ inferred mental states. Then pull those preferences back through time, reinforcement-learned. Then add the retribution and deservingness things on top. Power would be inferred from representations of other agents, something like trying to predict the other agents’ unobserved attributes.
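A skeletal version of that pipeline, as a rough sketch only (every class and method name here is my own invention, not a committed design):

```python
from dataclasses import dataclass, field

# Rough skeleton of the staged agent described above; all names are hypothetical.

@dataclass
class Agent:
    emotion: str = "neutral"  # global state selecting which pattern-response table to use
    policies: dict = field(default_factory=dict)     # emotion -> {observation: action}
    world_model: dict = field(default_factory=dict)  # unsupervised predictions, by part
    preferences: dict = field(default_factory=dict)  # state (incl. inferred mental states) -> value

    def act(self, observation):
        # Stage 1: stateful pattern-response, gated by the current emotion.
        return self.policies.get(self.emotion, {}).get(observation, "noop")

    def predict(self, observation):
        # Stage 2: predict the world in parts (placeholder for an unsupervised model).
        return self.world_model.get(observation)

    def value(self, predicted_state):
        # Stage 3: preferences over predicted states, which can include
        # other agents' inferred mental states.
        return self.preferences.get(predicted_state, 0.0)

    def update(self, trajectory, reward, lr=0.1, gamma=0.9):
        # Stage 4: pull preferences back through time (TD-style credit assignment).
        for state in reversed(trajectory):
            old = self.preferences.get(state, 0.0)
            self.preferences[state] = old + lr * (reward - old)
            reward *= gamma  # earlier states get a discounted share
```

Retribution/deservingness and power inference would sit on top of this, reading other agents’ entries out of the world model.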
Also, this doesn’t treat level 4 as some super-high-level thing; it’s just a natural result of running the world prediction for a while.
The better version of this model probably takes the form of a list of the most important built-in input-action mappings.
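For illustration only (these particular mappings are guesses, not claims about what’s actually built in), such a list might look like:

```python
# Illustrative built-in input-action mappings; the entries are guesses.
BUILTIN_MAPPINGS = {
    "smile_perceived": "approach",
    "loud_noise": "startle",
    "sudden_looming": "flinch",
    "food_smell": "orient_toward",
}
```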
Yeahhhhhh, missing TAP-type reasoning is a really critical failure here. I think a lot of important stuff happens around signaling whether you’ll be an agent who is level-1 valuable to be around, and I’ve thought before about how keeping your hidden TAP depth short, in ways that are recognizable to others, makes you more comfortable to be around because you’re more predictable. Or something.
Moreover, why should there be discussion? If a post is authoritative, well-researched, and obviously correct, then the only thing to do is upvote it and move on. A lengthy discussion thread is a sign that the post is unclear, incorrect, or has mindkilled its readers.
A lengthy discussion thread is progress.
An authoritative post is one person doing all the work.
A discussion thread is getting help with your initial thoughts.
That is the reason I use Facebook as little as possible, and I would stop interacting with LessWrong entirely if it moved to this format.
I don’t appreciate the participation-threat here, and I don’t think it’s reasonable to decide what’s good based on what current users would respond to by abandoning the site (don’t negotiate with terrorists, etc.). I also think you’re conflating things with the endless-discussion point: endless scrolling is super addictive, I agree, but I didn’t mean that. I meant that posts should be short-form, partially-finished thoughts by default. I think the crap Zvi and Ben Hoffman post makes the bar too high; things need to be shorter and less of a Big Deal. I’d prefer everything on one page by default, but with explicit paging required, to prevent severe addictiveness.
Facebook uses a number of nasty, evil tricks: carefully timing when you get notifications; outright lying about the number of new posts there are to read (on mobile it always says 9+, EVEN WHEN THERE ARE EXACTLY ZERO BECAUSE YOU UNSUBSCRIBED FROM EVERYTHING); infinite scroll; not showing you everything your friends post at once, so you can only see it all by going back repeatedly; not propagating notification counts between different clients; showing new notification *counts* without a refresh but not showing the new notifications themselves without a refresh; etc. It’s not hard to be less addictive than Facebook; it’s the default.
I don’t want high pageviews. I don’t want upvotes. I want discussion. I want a place where people can exchange ideas. I want to take what already exists on rationalists’ Facebook walls and move it to LessWrong.
My current thinking about how to implement this without having to build full-sized agents is to make little stateful reinforcement-learner-type things in a really simple agent-world, something like a typed-message-passing setup, possibly with 2D or 3D locations and falloff of action effects by distance. Then each agent can take actions, learn to map agent to reward, etc.
Make other agents’ reward states observable, maybe with a gating where an agent can choose to make its reward state non-observable to other agents, in exchange for that hiding being visible as an action somehow.
Make some sort of game out of the available actions: something like, agents have resources they need to live, can take them from each other, value being close to each other, value stability, etc. Some structure so that there are different contexts in which an agent can be cooperatey or defecty.
Hardcode or preinitialize-from-code the level 3 stuff. Maybe hardcode into the world the identification of which agent took an action at you? IRL there’s ambiguity about cause, and without that ambiguity some patterns probably won’t arise.
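A rough skeleton of that agent-world, treating everything above as knobs (all names and parameters here are hypothetical; this is a sketch, not a spec):

```python
import math
from dataclasses import dataclass

# Rough skeleton of the proposed agent-world; all names/parameters are hypothetical.

@dataclass
class SimAgent:
    agent_id: int
    pos: tuple                    # 2D location
    resources: float = 10.0
    reward: float = 0.0
    reward_visible: bool = True   # gating: hiding your reward is itself visible

def falloff(strength: float, a: SimAgent, b: SimAgent) -> float:
    """Action effects decay with distance between agents."""
    return strength / (1.0 + math.dist(a.pos, b.pos))

def take_resource(actor: SimAgent, target: SimAgent, amount: float = 1.0) -> dict:
    """One cooperate/defect-shaped action; the world attributes it to the actor."""
    taken = min(falloff(amount, actor, target), target.resources)
    target.resources -= taken
    actor.resources += taken
    # Hardcoded attribution: the target is told exactly who acted on it.
    return {"actor": actor.agent_id, "target": target.agent_id, "amount": taken}

def observe_reward(other: SimAgent):
    """Reward states are observable unless the other agent gated them off."""
    return other.reward if other.reward_visible else None  # the hiding is observable
```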
Could use really small neural networks, I guess, or maybe just linear matrices of [agents, actions], and then MCMC-sample from actions taken and such?
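To illustrate the linear-matrix option (my reading of it only, with the MCMC part swapped for plain softmax sampling to keep the sketch short):

```python
import numpy as np

# Sketch of the [agents, actions] linear-matrix option; softmax sampling stands in
# for the MCMC idea. My reading only, not a worked-out design.

N_AGENTS, N_ACTIONS = 5, 3
rng = np.random.default_rng(0)
weights = np.zeros((N_AGENTS, N_ACTIONS))  # weights[i, j]: agent i's propensity for action j

def sample_action(agent: int, temperature: float = 1.0) -> int:
    """Sample an action from a softmax over that agent's action weights."""
    logits = weights[agent] / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(N_ACTIONS, p=probs))

def update(agent: int, action: int, reward: float, lr: float = 0.1) -> None:
    """Reinforcement-style update from an observed (agent, action, reward) triple."""
    weights[agent, action] += lr * reward
```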
I’m confused about precisely how to implement deservingness… It seems like deservingness is something like a minimum control target for others’ reward, and retribution is a penalty that supersedes it? Maybe?
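One way to cash that out (purely my attempt at formalizing the guess above): each agent keeps a floor it tries to hold other agents’ reward above (deservingness), plus a retribution flag that overrides the floor with a negative target.

```python
# Guess at formalizing deservingness/retribution; purely illustrative.

def target_for_other(deserved_floor: float, retribution_active: bool,
                     retribution_target: float = -1.0) -> float:
    """The level this agent tries to steer another agent's reward toward."""
    if retribution_active:
        return retribution_target  # retribution supersedes deservingness
    return deserved_floor          # otherwise: keep them at or above the floor

def control_error(other_reward: float, deserved_floor: float,
                  retribution_active: bool) -> float:
    """Positive error -> push their reward up; negative -> push it down."""
    target = target_for_other(deserved_floor, retribution_active)
    if retribution_active:
        return target - other_reward
    # "Minimum control target": only intervene when they fall below the floor.
    return max(0.0, target - other_reward)
```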
If you use neural networks, implementing the power thing on level 3 is a fairly easy prediction task; with Bayesian MCMC or whatever, it’s much harder. Maybe that’s an OK place to use NNs? Trying to use NNs in a model like this feels like a bad idea unless the NNs are extremely regularized… Also, the inference needed for level 4 is hard without NNs.
Something I realized bothers me about this model: I basically didn’t include TAPs reasoning, a.k.a. classical conditioning; I started from operant conditioning.
Also, this explanation fails miserably at the “tell a story of how you got there in order to convey the subtleties” thing that e.g. Ben Hoffman was talking about recently.
No, I was thinking of Facebook. It needs to be a discussion platform, so it does need length, but basically what I want is an “endless comment thread” type deal: a feed of discussion, as you’d get if the home page defaulted to opening an endless open thread. As it is, open threads quarantine freeform discussion in a way that doesn’t get eyes.
man I’m kind of cranky tonight, sorry about that
I posted it in meta in the first place
Get Less Wrong known as a site where ideas are taken seriously and bullshit is not tolerated
They should ban you for how you’re interacting right now. I don’t know why they’re putting up with your dodging the issue, but either you don’t have the ability to figure out when someone is correctly calling you out, or you aren’t playing nice. Your brand of bullshit is a major reason I’ve avoided LessWrong, and I want it gone. I want people to critique my ideas ruthlessly and not critique me as a person with Deservingness at all. If you think being an asshole is normal, go away. You don’t have to hold back on what you think the problems are, but I sure as hell expect you to state them without implying I said them wrong.
0. start with a blank file
1. add preference function
2. add time
3. add the existence of another agent
4. add the existence of networks of other agents (a toy sketch of these steps follows)
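As a toy rendering of those steps (each stage is a minimal interpretation of the list above, not a committed design):

```python
# Toy rendering of steps 0-4; each stage is a minimal interpretation, not a design.

# 1. A preference function: a value over world states.
def preference(state: dict) -> float:
    return state.get("resources", 0.0)

# 2. Time: value a discounted trajectory, not just the present state.
def discounted_value(states: list, gamma: float = 0.9) -> float:
    return sum(preference(s) * gamma**t for t, s in enumerate(states))

# 3. Another agent: the state now includes an inferred value for someone else.
def social_preference(state: dict, other_weight: float = 0.5) -> float:
    return preference(state) + other_weight * state.get("other_inferred_value", 0.0)

# 4. Networks of agents: aggregate over many modeled others.
def networked_preference(state: dict, others_inferred: list, w: float = 0.5) -> float:
    return preference(state) + sum(w * v for v in others_inferred)
```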
I also find the specifics of the method unclear. When he shared it in a lightning talk a few years ago, the point that humans model each other recursively like this was the useful part for me.