Maybe consider taking votes on some things and creating prediction markets on what the votes will be at the end of the session.
Honestly looking back I can’t recall how I meant it. Thanks for the critique.
Should people who got funding from immoral activity pay it back?
The original post on the forum was written by multiple people, hence the use of “our”.
I made some edits.
In the past, we’ve often seen that officers in charge of nukes don’t launch them (Arkhipov; the generals in the Trump administration seemed minded to ensure they got a look in before any launch). What probability do you assign that Russian officers, when ordered to launch a nuke, refuse to do so?
How does your 70% probability that Ukraine doesn’t get nuked break down?
I think Russia has already suffered quite a lot of embarrassment and hasn’t used nukes yet. According to your model, why not?
The article I was looking for was by John, so when he linked one, I assumed it was his. Funny. https://www.lesswrong.com/posts/Psr9tnQFuEXiuqGcR/how-to-write-quickly-while-maintaining-epistemic-rigor
No, actually it was one you wrote; thanks so much for writing it, it’s been really useful to me. Originally I thought John was linking to his own article, which someone else had already sent me, so I didn’t check it. I haven’t read Elizabeth’s article yet.
If you or a partner have ever been pregnant and done research on what is helpful and harmful, feel free to link it here and I will add it to the LessWrong pregnancy wiki page. https://www.lesswrong.com/tag/pregnancy
I retract my criticism.
Sure, but in the unlikely event that a high-karma user had blown up the site immediately, wouldn’t you have known their identity and broken your word? If anything, I’d rather you not promise and instead say “it’s unlikely we’ll know”.
I think you should remove the code which checks which users pressed the button but didn’t enter the code. That seems not in the spirit of the game.
No, they don’t let me post to their wiki like this and I haven’t had enough success to publicise it yet.
How do we let people read some set of estimation functions across different problems?
If you want to know what an estimate of AI risk is, how do you see people’s simple models (estimation functions) of it?
Why did you build Squiggle when it doesn’t seem to address this estimation-functions problem?
Why not build a python package?
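To make the question concrete, here is a minimal sketch of what an “estimation function” might look like as plain Python — every name here is hypothetical, just illustrating the idea of small, readable, composable models of a quantity like AI risk:

```python
import random

def p_agi_by_2100():
    """Toy estimation function: a sample of P(AGI by 2100).

    Hypothetical prior, roughly centered on 0.4.
    """
    return random.betavariate(4, 6)

def p_doom_given_agi():
    """Toy conditional estimate, roughly centered on 0.2."""
    return random.betavariate(2, 8)

def p_doom(n=10_000):
    """Compose the two sub-estimates by Monte Carlo sampling."""
    samples = [p_agi_by_2100() * p_doom_given_agi() for _ in range(n)]
    return sum(samples) / n
```

Because each sub-estimate is an ordinary function, anyone can read the model, swap in their own priors, and re-run the composition — which is roughly what a package of shared estimation functions would enable.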
I like that the LessWrong team is trying to iterate on this holiday. I like the theme of Petrov Day, and even if this isn’t the perfect implementation, I like that they are pushing through. I think that a small chance of one day having a really broadly accepted Petrov Day is better than a 50% chance of losing LessWrong for 1 day a year for the next 10 years.