Just like the last 12 months were the time of the chatbots, the next 12 months will be the time of agent-like AI product releases.
Jonas V
LessWrong readers are invited to apply to the Lurkshop
Atlantis: Berkeley event venue available for rent
You can now apply to EA Funds anytime! (LTFF & EAIF only)
Apply to Effective Altruism Funds now
Yeah, that also feels right to me. I have been thinking about setting up some fund that maybe buys up a bunch of the equity that's held by safety researchers, so that the safety researchers don't also have to blow up their financial portfolios when they press the stop button or do some whistleblowing or whatever, and that does seem pretty good incentive-wise.
I’m interested in helping with making this happen.
Having another $1 billion to prevent AGI x-risk would be useful because we could spend it on large compute budgets for safety research teams.
Upvoted, I would like to see Berlin considered more strongly. Having lived there for two years, I think it's hard to overestimate how high the quality of living in Berlin is, not just in the easily verifiable ways listed above, but also in more subtle ways. E.g., in addition to being much cheaper, restaurants/cuisine just generally seem higher quality compared to many other places; German housing is much better than UK/US housing in ways that seem hard to appreciate for people who haven't lived in both locations; etc.
Edit: To clarify, I don’t want to suggest Berlin as the one single best rationalist hub, but as one of the global top 5.
To add some downsides:
- The language barrier is still a bit of an issue if you care about making friends outside the rationalist community.
- The airports are among the worst in the world. (Not true anymore, finally!)
Having another $1 billion to prevent AGI x-risk would be useful because we could spend it on large-scale lobbying efforts in DC.
Investing in early-stage AGI companies helps with reducing x-risk (via mission hedging, having board seats, shareholder activism)
Very interesting conversation!
I’m surprised by the strong emphasis on shorting long-dated bonds. Surely there’s a big risk of nominal interest rates coming apart from real interest rates, i.e., lots of money getting printed? I feel like it’s going to be very hard to predict what the Fed will do in light of 50% real interest rates, and Fed interventions could plausibly hurt your profits a lot here.
(You might suggest shorting long-dated TIPS, but those markets have less volume and higher borrow fees.)
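To make the concern concrete, here is a minimal sketch (my own illustration, not from the comment) pricing a 30-year zero-coupon bond as P = 100 / (1 + y)^30. All the yield numbers are hypothetical: a short profits when *nominal* yields rise and prices fall, but money-printing can keep nominal yields low even while real rates spike, which is exactly the scenario where the trade disappoints.

```python
# Hypothetical sketch: sensitivity of a long-dated bond short to the
# *nominal* yield. Yields below are made-up scenario values.
def zero_coupon_price(nominal_yield: float, years: int = 30) -> float:
    """Price of a 100-face-value zero-coupon bond at a flat nominal yield."""
    return 100 / (1 + nominal_yield) ** years

base = zero_coupon_price(0.04)        # entry: ~4% nominal yield
spike = zero_coupon_price(0.10)       # nominal yields follow real rates up
suppressed = zero_coupon_price(0.05)  # Fed prints; nominal yields barely move

print(f"entry {base:.2f} -> yields spike: {spike:.2f} (short wins big)")
print(f"entry {base:.2f} -> yields suppressed: {suppressed:.2f} (short barely profits)")
```

The gap between the two exit prices is the crux: the short's payoff depends almost entirely on what the Fed lets nominal yields do, not on real rates alone.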
I’m sure that wasn’t easy, congrats for going through with it and posting such a transparent write-up of your thinking!
Having another $1 billion to prevent AGI x-risk would be pretty useful.
You mean an AI ETF? My answer is no; I think making your own portfolio (based on advice in this post and elsewhere) will be a lot better.
Confirm.
I think it’s more that you don’t argue for why you believe what you believe and instead just assert that it’s cool, and the whole thing looks a bit sloppy (spelling mistakes, all caps, etc.).
This looked really reasonable until I saw that there was no NVDA in there; why’s that? (You might say high PE, but note that Forward PE is much lower.)
Doing a post-mortem on sapphire’s other posts, their track record is pretty great:
- BTC/crypto liftoff prediction: +22%
- Meta DAO: +1600%
- SAVE: −60%
- BSC Launcher: −100%?
- OLY2021: +32%
- Perpetual futures: +20%
- Perpetual futures, DeFi edition: +15%
- Bet on Biden: +40%
- AI portfolio: approx. −5% compared to index over same time period
- AI portfolio, second post: approx. +30% compared to index over same time period
- OpenAI/MSFT: ~0%
- Buy SOL: +1000%
There are many more that I didn’t look into.
All of these were over a couple of weeks/months, so if you had just blindly put 10% of your portfolio into each of the above, you would have gotten very impressive returns. (Overall, roughly ~5x relative to the broad market.)
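The arithmetic behind that kind of claim can be sketched as follows. This is a rough back-of-the-envelope check under my own simplifying assumptions (a flat 10% allocation per call, the remainder in cash, bets treated as sequential and non-compounding, and the BSC Launcher loss taken as the full −100%); it ignores timing, overlap, and compounding, so it is not a precise reconstruction of the track record.

```python
# Hypothetical sizing: 10% of the portfolio per call, rest in cash,
# returns treated as non-compounding. Figures are the ones listed above.
returns = {
    "BTC/crypto liftoff": 0.22,
    "Meta DAO": 16.00,
    "SAVE": -0.60,
    "BSC Launcher": -1.00,  # assumed total loss
    "OLY2021": 0.32,
    "Perpetual futures": 0.20,
    "Perpetual futures, DeFi": 0.15,
    "Bet on Biden": 0.40,
    "AI portfolio": -0.05,
    "AI portfolio, second post": 0.30,
    "OpenAI/MSFT": 0.00,
    "Buy SOL": 10.00,
}

allocation = 0.10  # 10% of the portfolio per call
total_gain = sum(allocation * r for r in returns.values())
print(f"Total gain: {total_gain:+.1%}, portfolio multiple: {1 + total_gain:.2f}x")
```

Note how the result is dominated by the two outliers (Meta DAO and SOL); with equal sizing, a couple of big winners carry the whole record.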
There’s some evidence from 2013 suggesting that long-dated, out-of-the-money call options have strongly negative EV; common explanations are that some buyers like gambling and drive up prices. See this article. I’ve also heard that over the last decade, some hedge funds therefore adopted the strategy of writing OTM calls on stocks they hold to boost their returns, and that some of these hedge funds disappeared a couple of years ago.
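For readers unfamiliar with that strategy, here is a minimal payoff sketch (my own illustration with made-up numbers: entry price 100, strike 120, premium 3) of writing an OTM call against a stock you hold, i.e., a covered call. It shows why the trade boosts returns in calm markets but caps the upside, which is one way such funds can blow up when the underlying moves violently.

```python
# Hypothetical covered-call P&L at expiry: long stock from `entry`,
# short one OTM call at `strike`, premium collected up front.
def covered_call_pnl(spot_at_expiry: float,
                     entry: float = 100.0,
                     strike: float = 120.0,
                     premium: float = 3.0) -> float:
    stock_pnl = spot_at_expiry - entry
    call_liability = max(spot_at_expiry - strike, 0.0)  # short call gets assigned
    return stock_pnl - call_liability + premium

print(covered_call_pnl(90))   # stock falls: loss, slightly cushioned by premium
print(covered_call_pnl(150))  # stock soars: gain capped at strike - entry + premium
```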
Has anyone looked into 1) whether this has replicated more recently, and 2) how much worse it makes some of the suggested strategies (if at all)?
The current AI x-risk grantmaking ecosystem is bad and could be improved substantially.