I’m white.
lc
As a useless anecdote, I took Lumina in November of last year. I generally drink a lot, and have commented on hangovers getting 2-4x worse in the past few months to friends, before reading this post or knowing anything about your hypothesis. This has occurred only in the last few months and I’m 24 years old.
How is this different from the situation in the late 19th century when only a few things left seemed to need a “consensus explanation”?
One might worry that it is difficult to set benchmarks of success for alignment research. Is a Newtonian understanding of gravitation sufficient to attempt a Moon landing, or must one develop a complete theory of general relativity before believing that one can land softly on the Moon?
In the case of AI alignment, there is at least one obvious benchmark to focus on initially. Imagine we had access to an incredibly powerful computer with access to the internet, an automated factory, and large sums of money. If we could program that computer to reliably achieve some simple goal (such as producing as much diamond as possible), then a large share of the AI alignment research would be completed.
Are we close to meeting this benchmark?
I would like to ask a followup question: since we don’t have a unified theory of physics yet, why isn’t adopting strongly any one of these nonpredictive interpretations premature? It seems like trying to “interpret” gravity without knowing about general relativity.
QQQ 640 (3y), SPY 750 (3y), VTI 340 (2y), SMH 290 (2y). Those were the latest expiration dates I could get.
Those SPX options look nice too, though I wish I could pay for a derivative that only paid out if the market jumped 100% in a single year, rather than, say, 15% per year throughout the rest of the 2020s.
Note: there was previously an awful typo here; the third bullet said “buying individual tech stocks” instead of “instead of buying individual tech stocks”. The reason I’m posting about this is that it seems higher expected value than buying and holding e.g. NVDA or call options on NVDA. I wish I had caught this typo sooner, as the previous post didn’t make any sense.
- The market makers don’t seem to be talking about it at all, and conversations I have with e.g. commodities traders suggest the topic doesn’t come up at work. Nowadays they talk about AI, but in terms of its near-term effects on automation, not to figure out whether it will respect their property rights or something.
- Large public AI companies like NVDA, which I would expect to be priced mostly on long-run projections of AI usage, have been consistently bid up after earnings, as if the stock market is constantly readjusting its expectations of AGI takeoff by the amount that NVDA is personally earning each quarter, rather than using those earnings to inform technical timelines. I think it’s more likely that they’re saying something close to “look! Nvidia’s revenues are rising!” and “wow, Nvidia has grown pretty consistently, we should increase the premium on their call options” and not much beyond that.
- Current NASDAQ futures prices are business as usual. There are only two ways to account for these prices if traders are pricing things in: either they think slow takeoff is extraordinarily unlikely (<1%) to occur before 2030, or that it is extremely unlikely to lead to lots of growth (or both). Either of these seems like a strange conclusion that would require unusually strong understanding of the tech tree and policy response, but as I mentioned, they’re not even talking about it, so how would they know?
- “Pricing this in” would require entire nation-states’ worth of capital. Even if there’s one ten-billion-dollar hedge fund out there that is considering these issues deeply, it wouldn’t have the power to move markets to where I think they ought to be.
- AGI takeoff is completely out of distribution for the Great Financial Machine Learning System: it is an event that has never happened before and would break more invariants about how economies work and grow than any black swan event since the dawn of public stock exchanges. There’s no strong reason to believe, a priori, that hedge funds are selected to account for it the same way they are selected to correctly predict Fed rate adjustments, beyond basic reasons like “hedge funds are filled with high-IQ people”. A similar, weaker argument explains why it was a good idea to buy put options on the market in February 2020.
- I do have call options on ETFs like QQQ, which is very tech-heavy, as well as SMH, which is a basket of semiconductor companies. But buying calls on individual tech stocks incurs a larger premium, because market makers see single stocks as much more volatile than indices. They’re willing to sell you options on e.g. VTI for much less, because it’s the entire stock market, and that has never appreciated more than something like 50% in a single year. My thesis is that market makers are making a mistake here, and so it’s higher expected value to buy call options on indices rather than on individual companies with an AI component.
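The premium gap I’m describing falls directly out of standard option pricing: holding strike and expiry fixed, a higher implied volatility alone makes a far-out-of-the-money call much more expensive. Here is a minimal Black-Scholes sketch with made-up numbers (the spot, strike, rate, and the 20%-vs-50% implied vols are all hypothetical, chosen only to contrast an index-like IV with a single-stock-like IV):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot: float, strike: float, vol: float, t_years: float, rate: float = 0.04) -> float:
    """Black-Scholes price of a European call (no dividends)."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t_years) / (vol * sqrt(t_years))
    d2 = d1 - vol * sqrt(t_years)
    return spot * norm_cdf(d1) - strike * exp(-rate * t_years) * norm_cdf(d2)

# Same far-out-of-the-money strike and expiry; only the implied vol differs.
index_call = bs_call(spot=100, strike=160, vol=0.20, t_years=3)  # index-like IV
stock_call = bs_call(spot=100, strike=160, vol=0.50, t_years=3)  # single-stock-like IV
print(f"index-vol call: {index_call:.2f}, stock-vol call: {stock_call:.2f}")
```

With these toy inputs, the high-vol call costs several times the low-vol one. The bet above is that this discount on index options is mispriced if AI appreciation shows up broadly across the index.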
I will add this to the FAQ because I think the article doesn’t make it clear.
My simple AGI investment & insurance strategy
Lex asks if the incident made Altman less trusting. Sam instantly says yes: he thinks he is an extremely trusting person who does not worry about edge cases, and he dislikes that this has made him think more about bad scenarios. So perhaps this could actually be really good? I do not want someone building AGI who does not worry about edge cases, assumes things will work out, and trusts fate. I want someone paranoid about things going horribly wrong, who does not trust a damn thing without a good reason.
Eh… I think you and he are worried about different things.
The most salient example of the bias I can think of comes from reading interviews and books about the people who worked in the extermination camps in the Holocaust. In my personal opinion, all the evidence points to them being literally normal people, representative of the average police officer or civil servant pre-1931. Holocaust historians nevertheless typically try very hard to outline some way in which Franz Stangl and crew were specially selected for lack of empathy, instead of raising the more obvious hypothesis that the median person is just not that upset by murdering strangers in a mildly indirect way, because the wonderful-humans bias demands a different conclusion.
This goes double in general for the entire public conception of killing as the most evil-feeling thing that humans can do, contrasted with actual memoirs of soldiers and the like who typically state that they were surprised how little they cared compared to the time they lied to their grandmother or whatever.
Rewrote to be more clear.
The “people are altruistic” bias is so pernicious and widespread I’ve never actually seen it articulated in detail or argued for. Most seem to both greatly underestimate the size of this bias, and assume opinions either way are a form of mind-projection fallacy on the part of nice/evil people. In fact, it looks to me like this skew is the deeper origin of a lot of other biases, including the just-world fallacy, and the cause of a lot of default contentment with a lot of our institutions of science, government, etc. You could call it a meta-bias that causes the Hansonian stuff to go largely unnoticed.
I would be willing to pay someone to help draft a LessWrong post for me about this; I think it’s important but my writing skills are lacking.
This is a crazy hit rate. Someone should give sapphire $100MM to trade with.
Are you a prosecutor/judge?
Was wondering how a criminal defense attorney could have ever believed that police shouldn’t exist until I got to the end!
Do happy people ever do couple’s counseling for the same reason that mentally healthy people sometimes do talk therapy?
Ok, then that sounds like a criticism of utilitarians, or maybe people, and not utilitarianism. Also, my point didn’t even mention utilitarianism, so what does that have to do with the above?
Men are smarter than women, by about 2-4 points on average. Men are also larger, and so need bigger brains to compensate for their size (though this does not explain the entire difference you cite).