Engineer at CoinList.co. Donor to LW 2.0.
Here, there’s minimal dependence on ACT, but a negative dependence on ACT2, meaning that extreme ACT scores (high or low) both lead to lower likely-to-graduate scores.
Does that seem counterintuitive to you? Remember, we are taking a student who is already enrolled in a particular known college and predicting how likely they are to graduate from that college.
Sounds like a classic example of Simpson’s paradox, no?
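One toy way to see how conditioning on the college could produce this: suppose ACT helps unconditionally, but colleges enroll students near a typical score, and being far from your college's typical score (in either direction) hurts graduation odds. A minimal simulation sketch, with all numbers made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each college has a typical ACT score; graduation
# chance rises with overall ACT but falls with "mismatch" from the
# college's typical score (these coefficients are invented).
college_means = np.array([18, 24, 30])
n = 30_000
college = rng.integers(0, 3, size=n)
act = np.clip(rng.normal(college_means[college], 2.5), 1, 36)

ability = 0.05 * act                                   # higher ACT helps overall
mismatch = 0.02 * (act - college_means[college]) ** 2  # extremes hurt within a college
p_grad = 1 / (1 + np.exp(-(ability - mismatch)))
grad = rng.random(n) < p_grad

# Within one college, regress graduation on centered ACT and ACT^2:
mask = college == 1
x = act[mask] - college_means[1]
X = np.column_stack([np.ones(mask.sum()), x, x ** 2])
beta, *_ = np.linalg.lstsq(X, grad[mask].astype(float), rcond=None)
print(beta)  # the quadratic coefficient comes out negative
```

So within a known college, the fitted ACT² coefficient is negative even though ACT is unconditionally good, which matches the "extreme scores in either direction lower the prediction" pattern.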
My current theory for what happened is that everyone bought into this delusion about the value of Bitcoin, but that, unlike other bubbles, it didn’t burst, because Bitcoin has a limited supply and there is literally nothing to anchor its value. So there’s no point where investors give up and sell, because there is no price at which it’s clearly overpriced.
This actually sounds pretty close to what you might call the “bubble theory of money”: that money is a bubble that doesn’t pop, that certain (relatively) useless commodities can become money if enough people think of them that way, and when that happens their price is inflated, relative to their use value.
This isn’t something that will happen to every commodity. Whether it happens depends both on the properties of the commodity, and also on things like memes and Schelling points.
Bitcoin has enough useful properties (it’s like gold, but digital) and, because of its first-mover advantage, is the Schelling point for a digital store of value (not that it couldn’t be replaced, but that’s a very uphill battle), so it has become money in this sense.
(On the memes-and-Schelling-points thing, see also: The Most Important Scarce Resource is Legitimacy, by Vitalik Buterin.)
the first 5-12 million dollar tab
You mean GPT-3? Are you asking whether it’s made enough money to pay for itself yet?
I believe that you (and the Twitter thread) are saying something meaningful, but I’m having trouble parsing it.
I had thought of the difference between variance and volatility as just that one is the square of the other. So saying that the VIX is “variance in vol units, but not volatility” doesn’t mean anything to me.
I think these are the critical tweets:
VIX is an index that measures the market implied level of 1-month variance on the S&P 500, or the square root thereof (to put it back in units we are used to).
This is not the same as volatility. A variance swap’s payoff is proportional to volatility squared. If you are short a variance swap at 10%, and then realized volatility turns out to be 40%, you lose your notional vega exposure times 16 (= 40^2 / 10^2 ).
To compensate for this, an equity index variance swap level is usually 2-3 points above the corresponding at the money implied volatility. So don’t look at VIX versus realized vol and make statements about risk premium without recognizing this extreme tail risk.
I was with him at “a variance swap’s payoff is proportional to volatility squared”. That matches my understanding of volatility as the square root of variance. But then I don’t get the next point about realized volatility needing to be “compensated for”.
Anybody care to explain?
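In case it helps, here is the tweet's arithmetic spelled out, under one common variance-swap convention (the "vega notional = 2 × strike × variance notional" relation is my assumption about the quoting convention, not something stated in the thread):

```python
# Variance swap P&L sketch (assumed convention): the swap pays
#   variance_notional * (realized_vol**2 - strike_vol**2),
# with vols quoted in vol points, and "vega notional" defined as the
# P&L per vol point for small moves near the strike:
#   vega_notional = 2 * strike * variance_notional.
strike = 10.0      # short a variance swap struck at 10% vol
realized = 40.0    # realized volatility turns out to be 40%

# Payoff scales with volatility *squared*, hence the tweet's factor of 16:
print((realized / strike) ** 2)  # (40/10)^2 = 16.0

variance_notional = 1.0
loss = variance_notional * (realized**2 - strike**2)  # 1600 - 100 = 1500
vega_notional = 2 * strike * variance_notional        # 20

# Measured in vega units, the short loses 75 vol points -- far more than
# the 30-point loss a linear (pure volatility) position would take:
print(loss / vega_notional)  # 75.0
```

That convexity is what has to be "compensated for": a short variance position loses quadratically when realized vol spikes, so the swap is struck a few points above at-the-money implied vol.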
Link for the curious: https://www.newyorker.com/culture/annals-of-inquiry/slate-star-codex-and-silicon-valleys-war-against-the-media
Which all make them even easier targets for criticism, and make confident enthusiasm for an idea increasingly correlated with being some kind of arrogant fool.
But it also means conviction is undervalued, and it might be a good time to buy low!
I hold positions in Bitcoin, Ethereum, and Tesla through Exchange Traded Funds.
For Bitcoin and Ether, do you mean the Grayscale trusts, GBTC and ETHE? My impression is that these are similar to ETFs, but not exactly the same thing, and I’m not aware of other ETFs that give you exposure to crypto (except for the small amount of exposure you’d get from owning shares in companies that have a little BTC on their balance sheet, like Tesla, Square, or MicroStrategy).
The difference between a TSAR bomb (or its modern equivalent) and the lowest settings of a mini-nuke is still an order of magnitude larger than the difference between the conventional “mother of all bombs” and a hand grenade. The Beirut explosion last year was the size of the hand grenade blast in this analogy.
I didn’t quite understand the last sentence here. Are you saying A) that the Beirut explosion was about the same size as a mini-nuke blast would be, or that B) MOAB : hand grenade :: TSAR bomb : Beirut explosion? (In which case the Beirut explosion would be larger than a mini-nuke explosion, if your claim about relative differences in the first sentence is correct.)

In other words, I take the first part of what you wrote to be saying that (TSAR bomb / mini-nuke) > (MOAB / grenade), but then I’m not sure whether the second part is saying that A) (TSAR bomb / Beirut explosion) = (TSAR bomb / mini-nuke), or B) (TSAR bomb / Beirut explosion) = (MOAB / grenade).

Is one of either A or B correct? (Or did you mean something else entirely?)
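To make the two readings concrete, here is a back-of-envelope comparison. All yield figures below are my own rough ballpark assumptions (in tons of TNT equivalent), not numbers from the parent comment:

```python
# Rough, assumed yield estimates in tons of TNT equivalent:
tsar = 50e6        # Tsar Bomba, ~50 megatons
mini_nuke = 15.0   # low-end tactical device, Davy Crockett class
moab = 11.0        # GBU-43/B "MOAB"
grenade = 2e-4     # hand grenade, ~0.2 kg of explosive
beirut = 1e3       # 2020 Beirut explosion, commonly estimated ~0.5-1.1 kt

# First claim: the nuke range dwarfs the conventional range.
print(tsar / mini_nuke)  # ~3.3e6
print(moab / grenade)    # ~5.5e4, so the first claim holds on these numbers

# Reading B: Tsar/Beirut should match MOAB/grenade.
print(tsar / beirut)     # ~5e4, close to MOAB/grenade
```

On these (assumed) figures, reading B roughly checks out, while reading A would put the Beirut blast near the mini-nuke's ~15 tons, far below the usual kiloton-scale estimates.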
sometimes people think of things as being either X or Y, and then learn an argument for why this dichotomy doesn’t make sense. As a result, they might reject the dichotomy entirely
This reminds me of the Fallacy of Gray.
I’m definitely left wondering what AI Alignment research is left at OpenAI
You may be interested to know that Jan Leike recently joined OpenAI and will lead their alignment team.
Suppose you want to bet on interest rates rising—would buying value stocks and shorting growth stocks be a good way to do it? (With the idea being that, if rates rise, future earnings will be discounted more and present earnings valued relatively more highly.)
And separately from whether long-value-short-growth would work, is there a more canonical or better way to bet on rates rising?
Just shorting bonds, perhaps? Is that the best you can do?
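The discounting intuition behind the long-value-short-growth idea can be checked with a toy DCF: a "value" stock with near-term cash flows versus a "growth" stock with the same undiscounted cash flows pushed further out (all numbers invented for illustration):

```python
def pv(cashflows, rate):
    """Present value of a list of (year, cashflow) pairs."""
    return sum(c / (1 + rate) ** t for t, c in cashflows)

value_stock = [(t, 10.0) for t in range(1, 11)]     # $10/yr, years 1-10
growth_stock = [(t, 10.0) for t in range(11, 21)]   # same $100 total, years 11-20

changes = {}
for name, cf in [("value", value_stock), ("growth", growth_stock)]:
    # How much does the valuation fall if rates rise from 2% to 4%?
    changes[name] = pv(cf, 0.04) / pv(cf, 0.02) - 1
    print(f"{name}: {changes[name]:.1%}")  # growth falls substantially more
```

Because the growth stock's cash flows have longer duration, the same rate rise hits its present value much harder, which is the mechanism the trade relies on (whether the factor spread actually tracks rates in practice is a separate empirical question).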
(Crossposted from Twitter)
Got it, thanks for the clarification.
Hmm, maybe it’s worth distinguishing two things that “mental states” might mean:
intermediate states in the process of executing some cognitive algorithm, which have some data associated with them
phenomenological states of conscious experience
I guess you could believe that a p-zombie could have #1, but not #2.
Consciousness/subjective experience describes something that is fundamentally non-material.
More non-material than “love” or “three”?
It makes sense to me to think of “three” as being “real” in some sense independently from the existence of any collection of three physical objects, and in that sense having a non-material existence. (And maybe you could say the same thing for abstract concepts like “love”.)
And also, three-ness is a pattern that collections of physical things might correspond to.
Do you think of consciousness as being non-material in a similar way? (Where the concept is not fundamentally a material thing, but you can identify it with collections of particles.)
If you just assume that there’s no primitive for consciousness, I would agree that the argument for illusionism is extremely strong since [unconscious matter spontaneously spawning consciousness] is extremely implausible.
How is this implausible at all? All kinds of totally real phenomena are emergent. There’s no primitive for temperature, yet it emerges out of the motions of many particles. There’s no primitive for wheel, but round things that roll still exist.
Maybe I’ve misunderstood your point though?
This is a familiar dialectic in philosophical debates about whether some domain X can be reduced to Y (meta-ethics is a salient comparison to me). The anti-reductionist (A) will argue that our core intuitions/concepts/practices related to X make clear that it cannot be reduced to Y, and that since X must exist (as we intuitively think it does), we should expand our metaphysics to include more than Y. The reductionist (R) will argue that X can in fact be reduced to Y, and that this is compatible with our intuitions/concepts/everyday practices with respect to X, and hence that X exists but it’s nothing over and above Y. The nihilist (N), by contrast, agrees with A that it follows from our intuitions/concepts/practices related to X that it cannot be reduced to Y, but agrees with R that there is in fact nothing over and above Y, and so concludes that there is no X, and that our intuitions/concepts/practices related to X are correspondingly misguided. Here, the disagreement between A vs. R/N is about whether more than Y exists; the disagreement between R vs. A/N is about whether a world of only Y “counts” as a world with X. This latter often begins to seem a matter of terminology; the substantive questions have already been settled.
Is this a well-known phenomenon? I think I’ve observed this dynamic before and found it very frustrating. It seems like philosophers keep executing the following procedure:
Take a sensible, but perhaps vague, everyday concept (e.g. consciousness, or free will), and give it a precise philosophical definition, but bake in some dubious, anti-reductionist assumptions into the definition.
Discuss the concept in ways that conflate the everyday concept and the precise philosophical one. (Failing to make clear that the philosophical concept may or may not be the best formalization of the folk concept.)
Realize that the anti-reductionist assumptions were false.
Claim that the everyday concept is an illusion.
Generate confusion (along with full employment for philosophers?).
If you’d just said that the precisely defined philosophical concept was a provisional formalization of the everyday concept in the first place, then you wouldn’t have to claim that the everyday concept was an illusion once you realize that your formalization was wrong!
No one ever thought that phenomenal zombies lacked introspective access to their own mental states
I’m surprised by this. I thought p-zombies were thought not to have mental states.
I thought the idea was that they replicated human input-output behavior while having “no one home”. Which sounds to me like not having mental states.
If they actually have mental states, then what separates them from the rest of us?
This may be a bit of a pedantic comment, but I’m a bit confused by how your comment starts:
I’ve done over 200 hours of research on this topic and have read basically all the sources the article cites. That said, I don’t agree with all of the claims.
The “That said, …” part seems to imply that what follows is surprising, as though the reader expects you to agree with all the claims. But isn’t the default presumption that, if you’ve done a whole bunch of research into some controversial question, the evidence will be mixed?
In other words, when I hear, “I’ve done over 200 hours of research … and have read … all the sources”, I think, “Of course you don’t agree with all the claims!” And it kind of throws me off that you seem to expect your readers to think that you would agree with all the claims.
Is the presumption that someone would only spend a whole bunch of hours researching these claims if they thought they were highly likely to be true? Or that only an uncritical, conspiracy theory true believer would put in so much time into looking into it?
I used SPX Dec ’22, 2700/3000 (the S&P was closer to those prices when I entered the position). And smart routing, I think; whatever the default is. I didn’t manually choose an exchange.