Right, they would certainly do it if you paid them enough (and lowering the fee is a form of payment); this is a reason why the price would be higher.
It sounds like the problem is that mulligans are necessary to ensure there’s a game to play at all, which depends mostly on having a reasonable number of lands, but that they have bad side effects, mostly from giving too much control over which nonland cards you have. So I propose the following mulligan rule:
On your first mulligan, draw five, then choose one: draw a card, or search your library for a basic land card, reveal it, and put it into your hand.
On your second mulligan, draw three, then choose as before, twice.
On your third mulligan, draw one, then choose as before three times.
There is no fourth mulligan.
This makes mulligans much better at ensuring you can play, and much worse at ensuring you can find a particular card or combo that you’re looking for.
(For complexity reasons, this rule would work better if there were a keyword for “either draw or fetch a land”, and if it were introduced in advance.)
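As a sketch of how the proposed rule plays out (the 60-card deck composition and the greedy fetch-until-three-lands strategy here are my illustrative assumptions, not part of the proposal):

```python
import random

def proposed_mulligan(deck, n):
    """Simulate the n-th mulligan (n = 1, 2, or 3) under the proposed
    rule: draw 7 - 2n cards, then make n choices, each either 'draw a
    card' or 'search your library for a basic land'."""
    deck = deck[:]
    random.shuffle(deck)
    hand = [deck.pop() for _ in range(7 - 2 * n)]
    for _ in range(n):
        # Illustrative strategy: fetch lands until the hand holds three.
        if hand.count("land") < 3 and "land" in deck:
            deck.remove("land")       # search library for a basic land
            hand.append("land")
        else:
            hand.append(deck.pop())   # otherwise, draw a card
    return hand

deck = ["land"] * 24 + ["spell"] * 36
```

With this strategy the n-th mulligan always yields 7−n cards containing at least n lands (library permitting), which is the "ensures you can play" property, while giving no extra selection over nonland cards.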
Stores don’t want to do this for the same reason they set prices that change frequently, sit one cent below round numbers, and are printed in the least legible font that is legally permissible. They want paying attention to prices to be inconvenient, because paying attention decreases spending and shifts that spending towards lower-margin items.
1. For health-related research, one of the main failure modes I’ve observed when people I know try to do this is tunnel vision and a lack of priors about what’s common and relevant. Reading raw research papers before you’ve read broad-overview stuff will make this worse, so read UpToDate first and Wikipedia second. If you must read raw research papers, find them with PubMed, but do this only rarely and only with a specific question in mind.
2. Before looking at the study itself, check how you got there. If you arrived via a search engine query that asked a question or posed a topic without presupposing an answer, that’s good; if there are multiple studies that say different things, you’ve sampled one of them at random. If you arrived via a query that asked for confirmation of a hypothesis, that’s bad; if there are multiple studies that say different things, you’ve sampled in a way that was biased towards that hypothesis. If you arrived via a news article, that’s the worst; if there are multiple studies that say different things, you’ve sampled in a way that was biased opposite reality.
3. Don’t bother with studies in rodents, animals smaller than rodents, cell cultures, or undergraduate psychology students. These studies are done in great numbers because they are cheap, but they have low average quality. The fact that they are so numerous makes the search-sampling problems in (2) more severe.
4. Think about what a sensible endpoint or metric would be before you look at what endpoint/metric was reported. If the reported metric is not the metric you expected, this will often be because the relevant metric was terrible. Classic examples are papers about battery technologies reporting power rather than capacity, and biomedical papers reporting effects on biomarkers rather than on symptoms or mortality.
5. Correctly controlling for confounders is much, much harder than people typically give it credit for. Adding extra things to the list of things controlled for can create spurious correlations, and study authors are not incentivized to handle this correctly. The practical upshot is that observational studies only count if the effect size is very large.
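To illustrate point 5 with a toy simulation (entirely my own construction): when two independent causes share a common effect, adding that effect to the "controlled for" list manufactures a correlation between them — the classic collider / Berkson's-paradox trap.

```python
import random

def corr(xs, ys):
    """Pearson correlation, computed directly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    vx = sum((a - mx) ** 2 for a in xs) / n
    vy = sum((b - my) ** 2 for b in ys) / n
    return cov / (vx * vy) ** 0.5

random.seed(1)
x = [random.gauss(0, 1) for _ in range(20000)]
y = [random.gauss(0, 1) for _ in range(20000)]
# c is a common *effect* of x and y (a collider), not a confounder.
c = [a + b + random.gauss(0, 0.5) for a, b in zip(x, y)]

r_all = corr(x, y)  # near zero: x and y really are independent
# "Controlling for" c by stratifying on it induces a strong negative
# correlation: within a slice of c, a high x forces a low y.
stratum = [(a, b) for a, b, s in zip(x, y, c) if abs(s) < 0.5]
r_stratum = corr([a for a, _ in stratum], [b for _, b in stratum])
```

A study author who reflexively "controls for" c here would report a strong association between two variables that have no causal connection at all.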
(We forgot to run one of the migration scripts)
Same cadence, but separating them does make sense and I might add that option in the future.
Nuclear power is typically located close to power demand, i.e. cities, because of the costs and losses of transporting power over long distances. This also limits the size/scale of nuclear power plants: if you build larger than one city’s demand, you have to transport the power over a long distance to find more sinks.
On the other hand, suppose a city were built specifically for the purpose of hosting nuclear power, carbon-capture, and CO2-to-fuel plants. Such a city might be able to have significantly cheaper nuclear power, since being far away from existing population centers would lower safety and regulatory costs, and concentrating production in one place might enable new economies of scale.
It seems like there are two worlds we could be in, here. In one world, nuclear power right now is like the space launch industry of a decade ago: very expensive, but expensive because of institutional failure and a need for R&D, rather than fundamental physics. In the other world, some component of power plants (steam turbines, for example) is already optimized close to reasonable limits, so an order of magnitude is not possible. Does anyone with engineering knowledge of this space have a sense of which is likely?
Something that confuses me. This discussion of afforestation (like others) focuses on planting trees (converting non-forest biomes into forest). But it seems like trees do a reasonably good job of spreading themselves; why not instead go to existing forests, cut down and bury any trees that are past their peak growth phase, and let the remaining trees plant the replacements? How much does the cutting and burial itself cost? (I see an estimate of $50/t for burying crop residues, which seems like it would be similar.)
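Back-of-the-envelope on that estimate (my own arithmetic, using the standard figures that dry wood is roughly half carbon by mass and that CO2 weighs 44/12 as much as the carbon it contains):

```python
cost_per_t_biomass = 50          # $/t, the cited estimate for burying crop residues
carbon_fraction = 0.5            # dry wood is roughly 50% carbon by mass
co2_per_t_carbon = 44 / 12       # molar-mass ratio of CO2 to carbon

co2_per_t_biomass = carbon_fraction * co2_per_t_carbon   # ~1.83 t CO2 per t biomass
cost_per_t_co2 = cost_per_t_biomass / co2_per_t_biomass  # ~$27 per t CO2
```

So if the $50/t figure transfers, cut-and-bury would run on the order of $27 per ton of CO2 removed, before counting the cutting itself.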
I expect to have some questions and ideas, but I’m still working my way through this, as I suspect are others. I really appreciate how in-depth this is!
Under Wikipedia’s rules, yes.
I expect that knowing you’re having anaphylaxis without a solution is already reasonably close to the upper end of psychological stress, and you can’t add that much more. The reason the epinephrine concentrations are so much higher in cardiac arrest patients is not because cardiac arrest is psychologically stressful, it’s because epinephrine release is triggered by hypoxia.
Curated. I think the meta-frame (frame of looking at frames) is key to figuring out a lot of important outstanding questions. In particular, some ideas are hard to parse or generate in some frames and easy to parse or generate in others. Most people have frame-blind-spots, and lose access to some ideas that way. There also seem to be groups within the rationality community that are feeling alienated, because too many of their conversations suffer from frame-mismatch problems.
Lizardman’s Constant is an observation seen in polls of unfiltered groups of people, but the people who were given the launch codes were selected for trustworthiness.
(This thread is our collective reenactment of the conversations about nuclear safety that happened during the cold war.)
Multiple large monitors, for programming.
Waterproof paper in the shower, for collecting thoughts and making a morning todo list.

Email filters and Priority Inbox, to prevent spurious interruptions while keeping enough trust that urgent things will generate notifications that I don’t feel compelled to check too often.

USB batteries for recharging phones—one to carry around, one at each charging spot for quick-swapping.
Yep, one of us edited it to fix the link. Added a GitHub issue for dealing with relative links in RSS in general: https://github.com/LessWrong2/Lesswrong2/issues/2434 .
Note that this would be a very non-idiomatic way to use jQuery. More typical architectures don’t do client-side templating; they do server-side rendering and client-side incremental mutation.
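To make the contrast concrete (the IDs, selectors, and data shape here are invented for illustration): client-side templating builds markup from data in the browser and injects it wholesale, while the more idiomatic approach assumes the server already rendered the markup and mutates only what changed.

```javascript
// Client-side templating style: render HTML from data in the browser.
function buildListHtml(tasks) {
  return tasks.map(t => `<li data-id="${t.id}">${t.name}</li>`).join("");
}
// ...then inject it wholesale:  $("#tasks").html(buildListHtml(tasks));

// Server-side rendering + incremental mutation style: the server already
// sent <ul id="tasks"><li data-id="1">Buy milk</li></ul>, and the client
// only touches the element that changed:
//   $('#tasks li[data-id="1"]').addClass("done");
```

The second style keeps the markup source of truth on the server, which is what most jQuery-era architectures assumed.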
I’m kinda confused about the relation between cryptography people and security mindset. Looking at the major cryptographic algorithm classes (hashing, symmetric-key, asymmetric-key), it seems pretty obvious that the correct standard algorithm in each class is probably a compound algorithm—hash by xor’ing the results of several highly-dissimilar hash functions, etc, so that a mathematical advance which breaks one algorithm doesn’t break the overall security of the system. But I don’t see anyone doing this in practice, and also don’t see signs of a debate on the topic. That makes me think that, to the extent they have security mindset, it’s either being defeated by political processes in the translation to practice, or it’s weirdly compartmentalized and not engaged with any practical reality or outside views.
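For concreteness, the kind of compound construction I have in mind, as a toy sketch (not a vetted design — XOR-combining has known caveats of its own, and concatenating digests is the more conservative combiner):

```python
import hashlib

def compound_hash(data: bytes) -> bytes:
    """XOR the digests of two structurally dissimilar hash functions,
    aiming for a combined output that a break of either function alone
    doesn't let an attacker control. Toy illustration only."""
    a = hashlib.sha256(data).digest()     # 32-byte digest, SHA-2 family
    b = hashlib.blake2s(data).digest()    # also 32 bytes, different design
    return bytes(x ^ y for x, y in zip(a, b))
```

Whether this particular combiner actually preserves collision resistance is exactly the kind of question I'd expect a public debate about, and don't see.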
In my experience, the motion that seems to prevent mental crowding-out is intervening on the timing of my thinking: if I force myself to spend longer on a narrow question/topic/idea than is comfortable (e.g. with a timer), I’ll eventually run out of cached thoughts and spot things I would otherwise have missed.