That’s not how I understand it literally. You don’t have to put it to the side/into some savings account. You just have to accept the risk that if you have to pay out in the unlikely case, you have to go into debt.
Yeah, for some reason people come up with this absurdly complicated mechanism for prediction bets that they don’t apply to pretty much any other form of debt. I don’t know why this keeps happening, but I’ve seen it elsewhere too.
Or take the risk that you’d feel bad by just … not paying. This is the one which should worry your counterparty, and which leads to escrow requirements.
Assuming the OP only accepts bets with accounts linked to a real-world identity, or pseudonymous accounts with a very high reputation (such as gwern), I think it’s safe enough not to require an escrow.
Why would someone who’s built up a reputation in the LW/rationalist/etc. community wreck it, publicly and on-the-record, over <$50k USD?
They’d be sacrificing way more in future potential since no one will willingly work with a scoundrel.
A lot can happen in 5 years. The OP could die. The bettor could die. And who knows, maybe the evidence of aliens is just deniable enough that it doesn’t cost reputation to claim a win.
That doesn’t extinguish the record of the bet; whoever inherits their assets would still be responsible for settling it. Maybe not at the full amount, but some settlement would still be necessary.
That’s already factored into the odds.
LOL! If you think an executor (or worse, an heir, if the estate is already settled) is going to pay $100k to a rando based on a five-year-old LessWrong post, you have a VERY different model of humanity than I do. Even more so if the estate didn’t include any mention of it or money earmarked for it.
How do the desires of possible executors/heirs/etc. factor into this?
Clearly the bet will not auto-extinguish and auto-erase itself regardless of the future desires of anyone.
If you thought I implied that the bet must be settled in purely monetary terms, that wasn’t my intention. It’s entirely possible for the majority, or the entirety, of the bet to be settled in non-monetary currencies such as social status or reputation.
It’s just not all that likely for someone, or their successors, to insist on going down that path.
I made the same argument myself (lol) in response to lsusr regarding Eliezer’s bet with Bryan Caplan:
https://www.lesswrong.com/posts/BknXGnQSfccoQTquR/the-caplan-yudkowsky-end-of-the-world-bet-scheme-doesn-t?commentId=44YGGYcx8wZZpgiof
(hit “see in context” to see the rest of my debate with lsusr)
Somehow it feels different at 0.5%, though, compared to the relatively even odds in the Yudkowsky-Caplan bet. (It’s not like I could earn, say, $200k USD in a few weeks before a deadline, the way Eliezer could earn $100.) 2% is getting closer to compensating for this issue for me, though.
True, but you presumably have to have the ability to pay it one way or another, and that’s still resources that could have been available for something else (e.g. you could have gone into debt anyway, if something happened to warrant doing so).
I did interpret it as a 0.5% thing, though, and now that the OP has stated they would be OK with 2%, that makes it significantly less unattractive - Charlie Steiner’s offer, which the OP provisionally accepted, seems not too far off from something I might want to copy.
However, the fact that the OP is making this offer means, IMO, that they are likely to be convinced by evidence significantly less convincing than what would convince me. So there’s a not-unlikely possibility that, 5 years from now, if I accept, we’ll get into an annoying debate over whether I’m trying to shirk on payment, when I’m just not convinced by whatever latest UFO news has convinced him. It’s also possible that other LessWrongers might be convinced by evidence that I wouldn’t be. Consider how there seems to be a fair amount of belief here, regarding the Nimitz incident, that if Fravor wasn’t lying or exaggerating it must be something unusual: if not aliens, then at least some kind of advanced technology (whereas I’ve pointed out that even if Fravor is honest and reasonably reliable (for a human), the evidence still looks compatible with conventional technology and normal errors/glitches).
That might be a hard-to-resolve sticking point, since I don’t consider it that unlikely that a large fraction of LessWrongers might (given Nimitz) be convinced by what I would consider weak evidence, and even if it were left to my discretion whether to pay, the reputational hit from refusing probably wouldn’t be worth the initial money.
BTW, I don’t consider it super unlikely that there are discoveries out there to be made that would be pretty ontologically surprising; it’s just that I mostly don’t expect them either to be behind UAPs or to be uncovered in the next 5 years (though I suppose AI developments could speed up revelations...).
I also note that some incidents do seem to me like they could possibly be deliberate hoaxes perpetrated within the government against other government employees who then, themselves sincere, spread them to the public (e.g. the current thing and maybe Bob Lazar). If I were to bet, I would specifically disclaim paying out merely because such hoaxes were found to have been carried out by some larger conspiracy that was also doing a lot of other stuff, even if it was sufficiently extensive to cause ontological shock; I am not comfortable betting against that at 2%. I would be OK, if I were otherwise satisfied with the bet, with paying out conditional on such a conspiracy being proven to have access to an ontologically shocking level of technology relative to the expected level of secret government tech.
A mere government hoax/psyop, with no accompanying non-prosaic UAP reality behind it, would NOT resolve in my favor; no issue from me on that.
In a world where a sizeable fraction of LW becomes convinced I might win the bet, I would expect that I wouldn’t have to wait very long before it became conclusive, so I wouldn’t mind just waiting that out. If in that case we hit the time-horizon constraints before it was definitive to you, then depending on the specifics I definitely would not rule out appealing to the community (or to specific ‘trusted’ individuals like Scott Alexander or Eliezer). I find this scenario unlikely to come to pass. I would, of course, in all cases commit to operating with you in good faith.
If you wish to extend that offer, I indeed will accept 50:1 (max bet size?). If you have any other concerns please let me know.
Regarding the case where there is evidence convincing to you, but not to me, after the five years:
If the LW community overwhelmingly agrees (say, >85%) that I was unreasonable in refusing to accept the evidence available as of 5 years from the time of the bet as overcoming the prior against ontologically surprising things being responsible for some “UAPs”, then I would agree to pay. I wouldn’t accept 50% of LessWrong holding that view as enough, and I don’t trust the judgement of particular individuals, even ones I trust to be intelligent and honest.
Evidence that arises or becomes publicly available after the 5 years doesn’t count, even if the bet was still under dispute at the time of the new evidence.
I will also operate in good faith, but I don’t promise not to be a stickler about the terms. (See, for example, Bryan Caplan on his successful bet that no EU member nation with a population over 10 million would leave before 2020, which he won despite the UK voting to leave in 2016; Bet 10 at https://docs.google.com/document/d/1qShKedFJptpxfTHl9MBtHARAiurX-WK6ChrMgQRQz-0.)
If you agree to these, in addition to what was discussed above, then I would be willing to offer $100k USD max bet for $2k USD now.
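For concreteness, the arithmetic behind the 50:1 framing can be sketched as a quick check (a minimal sketch using the $2k/$100k figures from this offer; it deliberately ignores five years of interest and inflation, which is part of the asymmetry raised earlier in the thread):

```python
# Sanity check of the implied odds: $2k received now against a
# $100k payout owed in 5 years if the UAP claim resolves true.
upfront = 2_000      # received now by the party taking the skeptical side
payout = 100_000     # owed later if paradigm-shattering revelations occur

# Probability at which the bet is expected-value neutral: 2%, i.e. 50:1.
break_even = upfront / payout
print(break_even)    # 0.02

def expected_value(p, upfront=upfront, payout=payout):
    """EV for the receiver of the upfront money, at their own probability p."""
    return upfront - p * payout

print(expected_value(0.005))   # +1500.0 at a 0.5% credence
print(expected_value(0.02))    # 0.0 at exactly the 2% break-even
```

At a 0.5% credence the skeptical side keeps a positive expected value; at 2% the bet is exactly fair, which is why the move from 0.5% to 2% matters in the negotiation above.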
This is more than acceptable for me. Please reach out for a way for me to pay you.
This is to publicly confirm that I have received approximately $2000 USD equivalent.
Unless you dispute what timing is appropriate for the knowledge cutoff, I will consider the knowledge cutoff, for the paradigm-shattering UAP-related revelations that would require me to send you $100k USD, to be 11:59 PM UTC, June 14, 2028.
Glad we could make this bet!