No, they don’t let me post to their wiki like this and I haven’t had enough success to publicise it yet.
How do we allow people to read some set of estimation functions on different problems?
If you want to know what an estimate of AI risk is, how do you find and read people’s simple models (estimation functions) of it?
Why did you build Squiggle when it doesn’t seem to deal with this estimation-functions problem?
Why not build a Python package?
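To make “estimation function” concrete, here’s a toy sketch in Python. The two factors and their Beta parameters are made up purely for illustration, not a real forecast; the point is just that the model is a small, shareable function returning a distribution rather than a point estimate:

```python
import numpy as np

def p_ai_catastrophe(n_samples: int = 10_000) -> np.ndarray:
    """Toy estimation function for AI risk: two uncertain factors multiplied.

    Both Beta parameterisations are illustrative placeholders, not forecasts.
    """
    rng = np.random.default_rng(0)
    p_agi_this_century = rng.beta(4, 6, n_samples)       # chance AGI arrives
    p_catastrophe_given_agi = rng.beta(2, 8, n_samples)  # chance it goes badly
    return p_agi_this_century * p_catastrophe_given_agi

samples = p_ai_catastrophe()
print(f"median estimate: {np.median(samples):.1%}")
```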
I like that the LessWrong team is trying to iterate on this holiday. I like the theme of Petrov Day, and even if this isn’t the perfect implementation, I like that they are pushing through. I think that the small chance of one day having a really broadly accepted Petrov Day outweighs a 50% chance of losing LessWrong for 1 day a year for the next 10 years.
I bought 1000 yes. Will donate/waste profits if someone says how I can do it without it having further bad impacts.
I’d be happy to pledge to waste the profits, if I get them, if there is a way to do that which can’t be broken.
It’s a shame the wiki doesn’t support the Google-Docs-like draft editor. I wish I could make in-line comments while writing.
I bought a load of yes, to try and drive the price up and incentivise people to sell their yes. Curious whether this was the right call.
My workflow is that I want to be able to draft with any user, maybe across months. Ideally I’d be able to link my drafts publicly while I’m working on them. I’ve tried this in many ways over the years. Currently I am trying:
- the docs linked from my website: https://nathanpmyoung.com/
- iterating on better wiki pages, e.g. https://www.lesswrong.com/tag/effective-altruism
How on earth have you got 8 LW upvotes in this short time?
We are all using this notation on LW. So while I agree that if you are poor you have to be more careful, if you are rich you still can’t just say what you want: it isn’t only one person who can call you out.
Likewise, if these go from being 1-1 bets to being markets, then many of your criticisms become smaller. Individuals have different utilities of money, sure, but with enough liquidity I guess this comes out in the wash.
I sort of still think that even with all these criticisms it’s fine and useful. Yeah, so some people will have to give lower numbers. Shrug. At least we’ll have an imperfect way of denoting something. Feel free to give a better suggestion.
While you are technically right I think that some of this will even out.
If you have lots of wealth, are stupid, and offer bad bets, then LessWrong has enough money to slowly take your money off you.
I don’t understand the risk aversion point
Your counterparty being well informed is exactly why you should have shallow pools at higher confidence and deeper pools at lower confidences. This is a feature, not a bug.
I don’t understand the coin flip example
I haven’t come up with a way of going backwards and forwards on discussions, i.e. in the criticisms tag.
The way prediction markets handle this is that there is limited liquidity at each price. So you offer $100 at 80%, $1,000 at 70%, $10k at 65%, etc. If you said “I’m willing to bet at 90%” and someone said “okay, I’ll bet you $1M”, you could say no without being a fraud.
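Here’s a minimal sketch of how such a liquidity ladder could be represented, in Python. The `ladder` structure and `depth_at` helper are just illustrations of the idea, not any existing tool:

```python
# Each rung pairs a stake with the probability offered at that depth.
ladder = [
    (100, 0.80),     # first $100 of action at 80%
    (1_000, 0.70),   # next $1,000 at 70%
    (10_000, 0.65),  # next $10,000 at 65%
]

def depth_at(ladder, probability):
    """Total stake the ladder commits at or above a given probability."""
    return sum(stake for stake, p in ladder if p >= probability)

print(depth_at(ladder, 0.80))  # 100: a surprise $1M offer at 80% can be refused
```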
I wonder how that works with regard to credal resistance, where you could state the depth of money at which you will either bet or retract the statement.
Biden will win the presidency $50 [45%] $1,000 [20%] $10,000 [10%]
Yeah this feels good.
I think the notation here feels unintuitive; I don’t think I’d guess what it means from reading it. Perhaps: 1 day 80% [5, 20], lifetime 80% [2.5, 40], though as I say in the other comment that just feels like a different confidence interval.
This % chance of change should fold back into your original forecast, but I like that there is something signalling the depth of your confidence. Though it’s unclear to me if confidence intervals suggest this notation already. If you had less chance of moving your interval, then it would already be a smaller interval, right?
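To spell out the “fold back” point, here’s a sketch assuming an ideally calibrated Bayesian forecaster (my framing, not part of the proposed notation):

```latex
% Law of iterated expectations: today's credence already averages over
% the updates you anticipate making,
\mathbb{E}\left[ p_{\text{later}} \right] = p_{\text{now}} ,
% so a "% chance of change" cannot shift the mean; what it can signal
% is the expected spread of p_later, i.e. how unstable the estimate is.
```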
I’m interested. In some ways it’s a lot like liquidity in a market: you are saying you’ll buy $100 at 90%, then $200 at 80%, etc. Someone can’t just come in and force you to bet $1M at 90%; you’d think they had more information than you.
I mainly write on the EA Forum, but I’d like to see articles which are in editing mode all the time, i.e. anyone can edit. I wonder how big a jump that is from this. I’ve written about it here: https://forum.effectivealtruism.org/posts/NxWssGagWoQWErRer/community-posts-a-new-forum-post-type-unofficial-pr-faq?commentId=oJJEn7FLo8uZEnRNF
As I said above, I struggled to follow the article and now can’t be bothered to reread it. But I agree that he disagrees with his previous conduct. Feels like “I disagree with you but went about it the wrong way” is something we’d welcome from those who disagree with us, right?
I will try to explain what I know. I guess 90% accuracy on individual points, so some of it will be wrong.

Overview: I think Weyl was going through a process of changing his mind for a year or two. Remmelt and I have both had conversations with him. I imagine there are more conversations and maybe some deep process we can’t see.

I’ve talked to Weyl for an hour or so on Twitter 3 or 4 times. I liked his book and like him personally, so I spent some time teasing out his thoughts whenever I thought he was being unfair, e.g. here: https://twitter.com/NathanpmYoung/status/1374308591709138948

Iirc I’d lightly pushed for a while for him to A) talk to some actual rationalists and B) send documents with criticisms to rationalists directly rather than posting them as open letters. I think a document posted by Weyl here would get a sober response. I’ve always felt Weyl was a sincere person who cares about AI risk etc., even if we disagreed. Also I genuinely like him, which makes it easier.

Four months ago, he wrote this: https://twitter.com/glenweyl/status/1423686528190980097

“I have [thought about writing on LessWrong] but I am worried I would get the tone wrong enough that it would be a net harm. @RemmeltE has kindly been trying to mentor me on this.”

and later to me: https://twitter.com/glenweyl/status/1424366991792513024

“Thanks for being so persistent with me about this. I do genuinely think that you’re basically right that my behavior here has been fundamentally hateful and against my principles, driven by feelings of guilt/shame and counterproductive to my own goals. I hope to have time before going out on paternity leave to post an apology on LessWrong.”

To me it felt as if he had a culturally different approach to AI risk than rationalists (he wants to get more people involved, and likes redistributing wealth and power), and also there was maybe hurt. This led him (in my opinion) to overextend in his criticisms, mingling what I thought were fair and unfair commentary. The article he shared here I thought was unfair and didn’t deserve Weyl’s support. I guess I hoped he might change his mind, but I was still surprised when it happened (which makes me wonder if there were other things going on). I was particularly surprised by the strength of the first and this subsequent apology.

Some thoughts and suggestions:
- I found the apology article a bit hard to follow. I read it a couple of hours ago and I’m not sure I could explain it now.
- Weyl seems to have done exactly what the rationalist part of me would want from him. If anything, it might be too much. I hope people are gracious to him for this. It probably cost him time, emotional energy, pride and possibly the respect of some others.
- I still wonder what led to him being so averse to rationalism in the first place.
- I’d suggest that, if you’re interested, you thank him for the apology and talk to him on the subject.

I’ve struggled to write this accurately and non-arrogantly/humbly, so apologies if I’ve overcooked it. Thanks to Neel for suggesting I give my thoughts.