Feature Suggestion: add a number to the hidden author names.
I enjoy keeping author names hidden when reading the site, but find it difficult to follow comment threads when there isn’t a persistent ID for each poster. I think a number would suffice while still keeping the names hidden.
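A minimal sketch of what I mean, assuming a simple per-thread mapping (the function name and sample data are made up for illustration): each author gets a stable number in order of first appearance, so replies can be tracked without revealing names.

```python
def number_authors(comments):
    """Map each author to a persistent per-thread number, in order of first appearance.

    comments: list of (author, text) pairs.
    Returns the same comments with authors replaced by stable numbers.
    """
    ids = {}
    numbered = []
    for author, text in comments:
        if author not in ids:
            ids[author] = len(ids) + 1  # first appearance gets the next number
        numbered.append((f"Author #{ids[author]}", text))
    return numbered

# Hypothetical thread: the same poster keeps the same number across the thread.
thread = [("alice", "First!"), ("bob", "A reply"), ("alice", "A counter-reply")]
print(number_authors(thread))
```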
Drake Morrison
This has unironically increased the levels of fun in my life
If you already have the concept, you only need a pointer. If you don’t have the concept, you need the whole construction. [1]
[1] Related: Sazen and Wisdom Cannot Be Unzipped
Yay! I’ve always been a big fan of the art you guys did on the books. The Least Wrong page has a sort of official magazine feel I like due to the extra design.
Similar to Wisdom cannot be Unzipped.
Completed the survey. I liked the additional questions you added, and the overall work put into this. Thanks!
Oh, got it.
I mean, that still sounds fine to me? I’d rather know about a cool article because it’s highly upvoted (and the submitter getting money for that) than not know about the article at all.
If the money starts being significant I can imagine authors migrating to the sites where they can get money for their writing. (I imagine this has already happened a bit with things like substack)
You get money for writing posts that people like. Upvoting posts doesn’t get you money. I imagine that creates an incentive to write posts. Maybe I’m misunderstanding you?
non.io is a reddit clone that costs $1 to subscribe, and then splits the money among the users whose posts you upvote most. I think it’s an interesting idea worth watching.
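My guess at the mechanism (this is a sketch of the general idea, not non.io’s actual algorithm): the subscription fee is divided among creators in proportion to how many of their posts you upvoted that month.

```python
def split_subscription(upvotes_by_creator, total=1.00):
    """Split a subscription fee among creators, proportional to upvotes received.

    upvotes_by_creator: dict mapping creator -> number of your upvotes they got.
    """
    all_upvotes = sum(upvotes_by_creator.values())
    return {creator: round(total * n / all_upvotes, 4)
            for creator, n in upvotes_by_creator.items()}

# Hypothetical month: 3 upvotes for alice, 1 for bob.
print(split_subscription({"alice": 3, "bob": 1}))  # {'alice': 0.75, 'bob': 0.25}
```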
Maybe? I’ve not played it all that much, honestly. I was simply struck by the neat way it interacted with multiple players.
I think it could be easily tweaked or houseruled into a peacewager game by just revealing all the hidden information. Next time I play I’ll probably try it out this way.
War of Whispers is a semi-cooperative game where you play as cults directing nations in their wars. The reason it’s cooperative is because each player’s cult can change the nation they are supporting. So you can end up negotiating and cooperating with other players to boost a particular nation, because you both get points for it.
Both times I’ve played people started on opposite sides, then ended up on the same or nearly the same side. In one of the games two players tied.
There is still the counting of points so it doesn’t quite fit what you are going for here, but it is the closest game I know of where multiple players can start negotiating for mutual aid and both win.
I think this is pointing at something real. Have you looked at any of the research with the MDA Framework used in video game development?
There are lots of reasons a group (or individual) goes to play a game. This framework found the reasons clustering into these 8 categories:
Sensation (the tactile senses: enjoying the shiny coins, or the clacking of dice)
Challenge (the usual “playing to win” but also things like speedrunners)
Narratives (playing for the story, the characters and their actions)
Fantasy (enjoyment of a make-believe world. Escapism)
Fellowship (hanging out with your buds, insider jokes, etc.)
Discovery (learning new things about the game, revealing a world and map, metroidvania-style games)
Expression (spending 4 hours in the character creation menu)
Abnegation (cookie cutter games, games to rest your mind and not think about things)
The categories are not mutually exclusive by any means, and I think this is pointing at the same thing this post is pointing at. Namely, where the emotional investment of the player is.
Oh, that’s right. I keep forgetting that LessWrong karma does the vote-weighting thing.
Has anyone tried experimenting with EigenKarma? It seems like it or something like it could be a good answer for some of this.
I think this elucidates the “everyone has motives” issue nicely. Regarding the responses, I feel uneasy about the second one. Sticking to the object level makes sense to me. I’m confused how psychoanalysis is supposed to work without devolving.
For example, let’s say someone thinks my motivation for writing this comment is [negative-valence trait or behavior]. How exactly am I supposed to verify my intentions?
In the simple case, I know what my intentions are and they either trust me when I tell them or they don’t.
It’s the cases when people can’t explain themselves that are tricky. Not everyone has the introspective skill, or verbal fluency, to explain their reasoning. I’m not really sure what to do in those cases other than asking the person I’m psychoanalyzing if that’s what’s happening.
Someone did a lot of this already here. Might be worth checking their script to use yourself.
I think what you are looking for is prediction markets. The ones I know of are:
Manifold Markets—play-money that’s easy and simple to use
Metaculus—more serious one with more complex tools (maybe real money somehow?)
PredictIt—just for US politics? But looks like real money?
I don’t see all comments as criticism. Many comments are of the building up variety! It’s that prune-comments and babble-comments have different risk-benefit profiles, and verifying whether a comment is building up or breaking down a post is difficult at times.
Send all the building-comments you like! I would find it surprising if you needed more than 3 comments per day to share examples, personal experiences, intuitions and relations.
The benefits of building-comments are easy to get within 3 comments per day per post. The risks of prune-comments (spawning demon threads) are easy to mitigate by capping things at 3 comments per day per post.
Are we entertaining technical solutions at this point? If so, I have some ideas. This feels to me like a problem of balancing the two kinds of content on the site. Balancing babble to prune, artist to critic, builder to breaker. I think Duncan wants an environment that encourages more Babbling/Building. Whereas it seems to me like Said wants an environment that encourages more Pruning/Breaking.
Both types of content are needed. Writing posts pattern matches with Babbling/Building, whereas writing comments matches closer to Pruning/Breaking. In my mind anyway. (update: prediction market)
Inspired by this post, I propose enforcing some kind of ratio between posts and comments. Say you get 3 comments per post before you get rate-limited?[1] This way, if you have a disagreement or are misunderstanding a post, there is room to clarify, but not room for demon threads. If it takes more than a few comments to clarify, that is an indication of a deeper model disagreement, and you should just go ahead and write your own post explaining your views. (As an aside, I would hope this creates an incentive to write posts in general, to help with the inevitable writer turnover.)
Obviously the exact ratio doesn’t have to be 3 comments to 1 post. It could be 10:1 or whatever the mod team wants to start with before adjusting as needed.
[1] I’m not suggesting that you get rate-limited site-wide if you start exceeding 3 comments per post. Just that you are rate-limited on that specific post.
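The proposal above is mechanically simple. A sketch of the per-post limit (a hypothetical helper, not LessWrong’s actual code): each (user, post) pair gets its own counter, so hitting the limit on one post doesn’t touch your ability to comment elsewhere.

```python
from collections import defaultdict

class PerPostRateLimiter:
    """Allow each user at most `limit` comments per post; other posts are unaffected."""

    def __init__(self, limit=3):
        self.limit = limit
        self.counts = defaultdict(int)  # (user, post) -> comments made so far

    def try_comment(self, user, post):
        """Record the comment and return True if under the per-post limit, else False."""
        if self.counts[(user, post)] >= self.limit:
            return False
        self.counts[(user, post)] += 1
        return True

rl = PerPostRateLimiter(limit=3)
print([rl.try_comment("commenter", "post-a") for _ in range(4)])  # [True, True, True, False]
print(rl.try_comment("commenter", "post-b"))  # True: the limit is per post, not site-wide
```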
If you can code, build a small AI with the fast.ai course. This will (hopefully) be fun while also showing you particular holes in your knowledge to improve, rather than a vague feeling of “learn more”.
If you want to follow along with more technical papers, you need to know the math of machine learning: linear algebra, multivariable calculus, and probability theory. For Agent Foundations work, you’ll need more logic and set theory type stuff.
MIRI has some recommendations for textbooks here. There’s also the Study Guide and this sequence on leveling up.
3blue1brown’s YouTube channel has good videos for a lot of this, if that’s the medium you like.
If you like non-standard fiction, some people like Project Lawful.
At the end of the day, it’s not a super well-defined field that has clear on-ramps into the deeper ends. You just gotta start somewhere, and follow your curiosity. Have fun!