It might very well be cultural. As an American I literally cannot imagine doing either of those things with a woman I am not in a relationship with. I’ve never seen anybody else do them either.
lc
Manifold Love: pro-tip: if a woman measures her hand against yours, this is almost always flirtation.
Totally did not know this. Is this true? [10% react x 2]
A little taken aback by this response. It’s not just flirting, it’s outright romantic. Asking this is like asking whether a woman resting her head on a man’s chest and purring is “flirting”. I didn’t realize this was a common experience for guys not in a relationship with the woman in question.
Also my impression is that business or political assassinations exist to this day in many countries; a little searching suggests Russia, Mexico, Venezuela, possibly Nigeria, and more.
Oh, definitely. In Mexico in particular, businesses pair up with organized crime all the time to strong-arm competitors. But that happens where there’s an organized-crime apparatus tycoons can cheaply (in terms of risk) pair up with. Also, OP asked specifically why companies don’t assassinate whistleblowers all the time.
a lot of hunter-gatherer people had to be able to fight to the death, so I don’t buy that it’s entirely about the human constitution
That was not criminal murder by the standards of the time. Arguably a lot of gang murders in the United States are committed by people who would not be capable of, or willing to, go out and murder someone on their own.
Robin Hanson has apparently asked the same thing. It seems like such a bizarre question to me:
- Most people do not have the constitution or agency for criminal murder.
- Most companies do not have secrets large enough that assassinations would reduce the size of their problems in expectation.
- Most people who work at large companies don’t really give a shit if that company gets fined or into legal trouble, and so they have no motivation to personally risk anything organizing murders to prevent lawsuits.
Either would just change everything, so to any prediction ten years out you basically have to prepend “if AI or gene editing doesn’t change everything”.
We will witness a resurgent alt-right movement soon, this time facing a dulled institutional backlash compared to what kept it from growing during the mid-2010s. I could see Nick Fuentes becoming a Congressman or at least a major participant in Republican party politics within the next 10 years if AI/Gene Editing doesn’t change much.
I seriously doubt on priors that Boeing corporate is murdering employees.
Why aren’t males way smarter than females on average? Males have ~13% higher cortical neuron density and 11% heavier brains...
Men are smarter than women, by about 2-4 points on average. Men are also larger, and so need bigger brains to compensate for their size (though this does not explain the entire difference you cite).
I’m white.
As a useless anecdote, I took Lumina in November of last year. I generally drink a lot, and had already commented to friends that my hangovers have gotten 2-4x worse over the past few months, before reading this post or knowing anything about your hypothesis. I’m 24 years old.
How is this different from the situation in the late 19th century when only a few things left seemed to need a “consensus explanation”?
One might worry that it is difficult to set benchmarks of success for alignment research. Is a Newtonian understanding of gravitation sufficient to attempt a Moon landing, or must one develop a complete theory of general relativity before believing that one can land softly on the Moon?
In the case of AI alignment, there is at least one obvious benchmark to focus on initially. Imagine we had access to an incredibly powerful computer with access to the internet, an automated factory, and large sums of money. If we could program that computer to reliably achieve some simple goal (such as producing as much diamond as possible), then a large share of the AI alignment research would be completed.
Are we close to meeting this benchmark?
I would like to ask a followup question: since we don’t have a unified theory of physics yet, why isn’t adopting strongly any one of these nonpredictive interpretations premature? It seems like trying to “interpret” gravity without knowing about general relativity.
QQQ 640 (3y), SPY 750 (3y), VTI 340 (2y), SMH 290 (2y). Those were the latest expiration dates I could get.
Those SPX options look nice too, though I wish I could pay for a derivative that only paid out if the market jumped 100% in a single year, rather than say 15% per year throughout the rest of the 2020s.
Note: there was previously an awful typo here; the third bullet said “buying individual tech stocks” instead of “instead of buying individual tech stocks”. The reason I’m posting about this is that it seems higher expected value than buying and holding e.g. NVDA or call options on NVDA. I wish I had caught the typo sooner, as the previous post didn’t make any sense.
- The market makers don’t seem to be talking about it at all, and conversations I have with e.g. commodities traders suggest the topic doesn’t come up at work. Nowadays they talk about AI, but in terms of its near-term effects on automation, not to figure out whether it will respect their property rights or something.
- Large public AI companies like NVDA, which I would expect to be priced mostly on long-run projections of AI usage, have been consistently bid up after earnings, as if the stock market is constantly readjusting its expectations of AGI takeoff by the amount NVDA is personally earning each quarter, rather than using those earnings to inform technical timelines. I think it’s more likely that traders are saying something close to “look! Nvidia’s revenues are rising!” and “wow, Nvidia has grown pretty consistently, we should increase the premium on their call options” and not much beyond that.
- Current NASDAQ futures prices are business as usual. There are only two ways to account for these prices if markets are pricing things in: either they think slow takeoff is extraordinarily (<1%) unlikely to occur before 2030, or extremely unlikely to lead to lots of growth, or both. Either of these seems like a strange conclusion that would require an unusually strong understanding of the tech tree and policy response, but as I mentioned, they’re not even talking about it, so how would they know?
- “Pricing this in” would require entire nation-states’ worth of capital. Even if there’s one ten-billion-dollar hedge fund out there that is considering these issues deeply, it wouldn’t have the power to move markets to where I think they ought to be.
- AGI takeoff is completely out of distribution for the Great Financial Machine Learning System: it is an event that has never happened before, and it would break more invariants about how economies work and grow than any black swan since the dawn of public stock exchanges. There’s no strong a priori reason to believe hedge funds are selected to account for it the way they are selected to correctly predict Fed rate adjustments, besides basic reasons like “hedge funds are filled with high-IQ people”. A similar, weaker argument explains why it was a good idea to buy put options on the market in February 2020.
- I do have call options on ETFs like QQQ, which is very tech-heavy, as well as SMH, which is a basket of semiconductor companies. But buying calls on individual tech stocks incurs a larger premium, because market makers see single stocks as much more volatile than indices. They’re willing to sell you options on e.g. VTI for much less, because it’s the entire stock market, and that’s never appreciated more than something like 50% in a single year. My thesis is that market makers are making a mistake here, and so it’s higher expected value to buy call options on indices rather than on individual companies with an AI component.
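To make the premium gap concrete, here is a rough Black-Scholes sketch (illustrative numbers I made up, not real quotes; it ignores skew, dividends, and American exercise). At the same moneyness and expiry, the only input that differs is implied volatility, and that alone makes the single-stock call several times more expensive:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot: float, strike: float, t: float, r: float, sigma: float) -> float:
    """Black-Scholes price of a European call option."""
    d1 = (log(spot / strike) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-r * t) * norm_cdf(d2)

# Same 30%-out-of-the-money strike, same two-year expiry, same rate;
# the implied vols (18% for a broad index, 45% for a single tech stock)
# are hypothetical round numbers for illustration.
index_call = bs_call(spot=100, strike=130, t=2.0, r=0.04, sigma=0.18)
stock_call = bs_call(spot=100, strike=130, t=2.0, r=0.04, sigma=0.45)
print(f"index-like call: {index_call:.2f}, stock-like call: {stock_call:.2f}")
```

If you believe the index itself will make a move the market treats as stock-sized, the cheaper index call is the better buy, which is the thesis above in one line.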
I will add this to the FAQ because I think the article doesn’t make it clear.
I’m sure you’ll have fun in general, but you would literally have a better chance of finding a girlfriend at a Magic: The Gathering tournament.