I don’t think the problem you’re running into is a problem with making bets; it’s a problem with leverage.
Heck, you’ve already figured out how to place a bet that pays off in the future but pays you money now: a loan. Combined with either the implicit bet that the end of the world will free you from repayment, or an explicit side bet with a more-AI-skeptical colleague, this gets you your way of betting on AI risk that pays now.
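To make the loan-as-bet arithmetic concrete, here’s a quick sketch (all numbers hypothetical, chosen purely for illustration): if you assign probability p to never having to repay, the loan’s expected cost to you shrinks by exactly that factor.

```python
def expected_repayment(principal: float, annual_rate: float,
                       years: float, p_no_repay: float) -> float:
    """Expected amount actually repaid on a loan when, with probability
    p_no_repay, repayment never happens (the doom scenario the borrower
    is implicitly betting on). All figures here are hypothetical."""
    amount_owed = principal * (1 + annual_rate) ** years
    return (1 - p_no_repay) * amount_owed

# A borrower who assigns 50% to never repaying values a 10,000 loan
# at 5% over one year as costing only half the nominal 10,500 owed:
print(expected_repayment(10_000, 0.05, 1, 0.5))  # 5250.0
```

The lender, of course, runs the same calculation with a much lower p, which is exactly the disagreement the bet exploits.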
Where it falls short is that most loanmaking organisations will at most offer you slightly more than the collateral you can put up. Because, well, to most loanmaking organisations you’re just a big watery bag of counterparty risk, and if they loan you substantially more than your net worth they’re very unlikely to get it back—even if you lose your bet!
But this is a problem people have run into before! Every day there are organisations that want to raise far more cash than they can put up in collateral in order to make risky investments that might not pay off. Those organisations sell shares. Shares entitle the buyer to a fraction of the uncertain future revenues, and it’s that upside—the potential for the funder to make back a lot more money than was put in—that separates them from loans.
Now, as an individual, you’re cut off from stock markets. The closest approximation available is venture capital. That gives you almost everything you want, except that it requires you to come up with a way to monetise your beliefs.
The other path is to pay your funders in expected better-worlds, and that takes you to the door of charitable funding. Here I’m thinking both of places like the LTFF and SAF, and more generally of high-net-worth (HNW) funders themselves. The former is pretty accessible, but limited in its capacity. The latter is less accessible, but with much greater capacity. In both cases they expect more than just a bet thesis; they require a plan to actually pay them back some better-worlds!
It’s worth noting that if you actually have a plan (even a vague one!) for reducing the risk from short AI timelines, then you shouldn’t have much trouble getting some expenses out of LTFF/SAF/etc to explore it. They’re pretty generous. If you can’t convince them of your plan’s value, then in all honesty your plan likely needs more work. If you can convince them, it’s a solid path to substantially more direct funding.
But those, I think, are the only possible solutions to your issue. They all have some sort of barrier to entry, but that’s necessary because from the outside you’re indistinguishable from any other gambler!