Yes. That is a limitation of the model.
No, it’s not *my* model.
But within the model, marketing is one of the factors that determines the quality of a mousetrap.
Phrase your predictions such that, if you said “I bet [statement] at the odds implied by N% certainty”, more money would be bet against you at those odds than would be offered to you at better ones.
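As a hypothetical illustration of the odds arithmetic here (the function name and the 80% figure are mine, not from the comment): a stated certainty of N% implies fair odds of N : (100 − N), so a well-calibrated claim is one you would back at those odds.

```python
def implied_odds(certainty_pct: float) -> tuple[float, float]:
    """Return the (stake, payout) ratio implied by a stated certainty.

    At 80% certainty, fair odds are 80:20 = 4:1, so you should be
    willing to risk 4 units to win 1 on the statement being true.
    """
    p = certainty_pct / 100.0
    return (p / (1 - p), 1.0)

stake, payout = implied_odds(80)
print(f"At 80% certainty: risk {stake:.0f} to win {payout:.0f}")
```

If real money would flow in against you at those implied odds, the stated certainty was too high.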
The odds that there’s some serious side effect that isn’t extinction-level are many orders of magnitude higher, and the approval system was made in advance with the full knowledge and careful consideration of the potential of epidemics.
Yes, GEM does not permit there to be money on the ground. It’s one of the limitations of the model.
Which is a great reason to not use it to model situations where there is money on the ground.
There’s no way to profit off of people preferring to buy mousetraps that are less effective at trapping mice, because in that framing the best mousetrap is *not* the one that is most effective at trapping mice.
You also can’t profit off of people buying fake 0-day exploits, unless you’re selling the best 0-days.
Naive algorithmic anything-optimization will not make those subtle trade-offs. Metric maximization run on humans is already a major failure point of large businesses, and the best an AI that uses metrics can do is draw awareness to the fact that even metrics which don’t start out bad become bad over time.
e.g. microprocessor research, psychology as applied to advertising.
Within the constraints of GEM, a ‘better mousetrap’ is one that customers agree is better.
The best theory isn’t the one that most accurately predicts reality, it’s the one that the academics agree is best.
Just as situations change and the best mousetrap for securing a clay granary is not the best mousetrap for a nursery, the best theory is not constant over time, but when a new theory becomes best, that is coincident with it becoming the academic standard.
There’s a middle ground between having an organization be profitable, and an organization optimizing for profitability.
I’ll go one step further: Anything that can be a for-profit loop already is.
> a product to certain beneficiaries, whether clean water for poor rural areas, or a non-rival good such as research for the world at large.
If those rural areas would be able to purchase clean water, or a way of producing their own clean water, in a manner profitable to investors in the rural water service area, they would.
Where the world at large pays for research results, those fields are privately funded.
> If anything, the problem is worse, because now the company has eliminated most of the “warning bells”—the more-frequent fires which are big but not disastrous.
Why would preventing small fires, which are qualitatively different from and causally unrelated to supervolcano eruptions, eliminate any of the “warning bells” suggesting that supervolcano eruptions are a thing?
“Ignoring breakdowns of the model” means the same thing as “using the model where it is useless”. That can serve an illustrative purpose, but it means that in order to apply that metaphor to something real, you must first demonstrate that the negative impact of that thing *actually* follows the behavior of the power law even for very large N; you can’t just observe it for small N and extrapolate.
For example, insurance companies have a hard cap on liability. If every policy they have outstanding is filed for the policy limit, there is no additional source of liability to be had: their tail actually has a hard cutoff. That still allows actual claims to exactly match a power law for all observed cases.
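A minimal sketch of that point, with made-up numbers (the Pareto shape, scale, and policy limit are all assumptions for illustration): claims drawn from a power-law distribution but paid out only up to a policy limit look exactly like the pure power law below the limit, while total liability per claim is hard-capped.

```python
import random

random.seed(0)
POLICY_LIMIT = 1_000_000  # hypothetical per-policy liability cap
ALPHA = 1.5               # assumed Pareto shape parameter

# Raw claim sizes follow a power law (Pareto) distribution.
raw_claims = [random.paretovariate(ALPHA) * 1_000 for _ in range(100_000)]

# Payouts are capped at the policy limit.
paid_claims = [min(c, POLICY_LIMIT) for c in raw_claims]

# Below the limit, observed payouts are unchanged: indistinguishable
# from the untruncated power law for all such cases.
assert all(p == c for p, c in zip(paid_claims, raw_claims) if c < POLICY_LIMIT)

# But unlike the pure power law, the tail has a hard cutoff.
print(max(paid_claims) <= POLICY_LIMIT)
```

So a dataset of observed claims can fit a power law perfectly while the catastrophic tail the model predicts simply cannot occur.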
I was speaking of inequality generally, not specifically housing inequality.
The entire point was a cheap shot at people who think that inequality is inherently bad, like suggesting destroying all the value to eliminate all the inequality.
There’s a very tiny percentage chance that there’s a completely unexpected long-term complication. Widespread distribution and vaccination with such a complication could be extinction-level.
Having a high-drama discussion fully in public violates a heuristic of “don’t air your dirty laundry in public”, and I don’t understand that heuristic well enough to advocate for it.
those who are emotionally central to you, no matter the distance
I suspect that it would be better to build a norm of “People who are physically distant from you and important”, because there will be social pressure (even unintentional) from people who are physically nearby to be declared ‘most important’, and a norm of preferring to include distant people partially counters that.
It looks like an exhaust port that incorporates a heat sink and moisture separator is plausibly more effective at preventing pathogen escape, but it has to be high-volume enough to pass a sneeze without it blowing out along the face.
And, as I’ve said above, I think that it’s not sufficiently safe to assume that they inactivate within seconds of drying out.
Publicly visible posts seem like the exception rather than the rule, and it seems odd to anticipate that people will regularly take steps to observe comments by people that they have blocked.
As I understand the mechanics, people who block a lot of people are in the same position as people who have been blocked by a lot of people, in terms of what they can see in such a discussion.
No, and yes.
For mousetraps, it has the implicit problems with ignoring the effects of marketing on customer decision.
For academic theories, it has the implicit problem that the *consumers* (the scientific community, which is presented with multiple theories and forms a consensus around zero or more of them) are being misstated as the producer.
You can get paid for seeing an error in someone’s mousetrap design and making a design that lacks that error, even if the error is “bad marketing”. You can’t get paid for seeing an error in a mousetrap buyer, especially if that error is “ignores the exemplary marketing or otherwise makes decisions based on the wrong factors”.
Academia pays in prestige, but it doesn’t pay in prestige for “doesn’t join or change the consensus”; it pays a little bit to join the consensus and a fair bit more for publishing results that change the consensus, but joining it is much easier than changing it.