There is a third option: consuming rationalist-branded content for entertainment value and interest.
Pretentious Penguin
the minimum distance between a compact and a connected set is achieved by some pair of points in the sets
I don’t think this is true. For a counterexample in the plane, let A be the set consisting of the point (2,0) and let B be the open unit disk centered at the origin. A is compact, B is connected, and the infimum of {distance from x to y, where x is in A and y is in B} is 1. But the distance from A to any point in B is strictly greater than 1, so the infimum is not achieved by any pair of points.
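A quick numerical sketch of the counterexample (the sample points and step sizes below are mine, for illustration): walking toward the boundary point (1,0) from inside B, the distance to A tends to the infimum 1 but never reaches it.

```python
import math

# A = {(2, 0)} is compact; B is the open unit disk (points with norm < 1).
# Approach the boundary point (1, 0) from inside B: the distance to A
# tends to the infimum 1 but is always strictly greater than 1.
for eps in [1e-1, 1e-3, 1e-6]:
    bx, by = 1.0 - eps, 0.0             # (bx, by) lies in B since bx**2 + by**2 < 1
    d = math.hypot(2.0 - bx, 0.0 - by)  # distance to (2, 0); equals 1 + eps here
    assert d > 1.0                      # strictly above the infimum
    print(d)
```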
The manufacturer of Ezekiel bread claims that it has a glycemic index of 36. It’s fair to say that the exact details don’t matter, but that’s almost a factor of 2 off from your table. And qualitatively it puts Ezekiel bread firmly in the range of medieval breads on your table.
https://www.foodforlife.com/low-glycemic-index-bread.htm
a density matrix whose off-diagonal elements are all zero is “decohered”, and can be considered the classical limit of this. A decohered density matrix behaves exactly like a classical distribution, and follows classic Markovian dynamics;
I don’t think this bullet point is accurate. Any pure state will have all its off-diagonal elements be zero in a basis where that state is one of the basis vectors, but it’s not fair to say that any pure state “behaves exactly like a classical distribution”. I suppose it would be more accurate to say that a state whose off-diagonal entries are all zero in some basis will look classical with respect to dynamics and measurements in that basis, but that concept is hard to explain unless the idea of observables corresponding to Hermitian operators has already been explained.
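To illustrate the basis-dependence (this example is mine, not from the original post): the pure state |+> = (|0> + |1>)/√2 has large off-diagonal entries in the computational basis, yet is exactly diagonal in the {|+>, |−>} basis, while remaining a pure state rather than a classical mixture.

```python
import numpy as np

# Density matrix of the pure state |+> = (|0> + |1>)/sqrt(2).
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)          # off-diagonals are 0.5 in the computational basis

# Rewrite rho in the {|+>, |->} basis. This change-of-basis matrix is
# real and symmetric, so it equals its own conjugate transpose.
U = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
rho_pm = U @ rho @ U

print(np.round(rho_pm, 10))         # diag(1, 0): all off-diagonals vanish in this basis
assert np.isclose(np.trace(rho_pm @ rho_pm), 1.0)  # purity Tr(rho^2) = 1, not a mixture
```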
The reasoning of this post makes Ezekiel bread look like a good option — it’s a combination of whole grains and legumes based on a biblical description of what Israelites might eat while in a besieged city. It has 6 grams of protein per slice.
(The recipe in the Bible also specified it be cooked using human feces or cow dung as fuel, but I don’t think the industrially produced bread sold today follows this part of the recipe. It also has added wheat gluten, and it uses soybeans where the Bible doesn’t specify a variety of beans.)
Death is regarded as tragic, and once someone has a serious condition, people invest in fighting. Up close, people try to delay death.
This oversimplifies the diversity of human values around death. For a lot of people, deaths are divided into “bad deaths / premature deaths” and “good deaths / appropriate deaths / timely deaths”. That’s why few people experience significant sadness when hearing about the death of a nonagenarian, and why many elderly people adopt a relaxed attitude toward their impending demise.
It’s possible that these people feel this way only because they cannot imagine living a very long time without severe disability, and/or because they are prioritizing a limited “grief budget” for the saddest deaths, and that properly contemplating the possibility of defeating death altogether would disabuse them of the notion that some deaths are basically fine.
But predicting how human values generalize out of distribution when technology enables new possibilities is a hard problem, so why should we assume that people’s “true” values around death are that it is always tragic and worth preventing?
Which airlines make you pay when they force you to check your bag due to running out of overhead bin space? I frequently have to check a bag I intended for the overhead bin because I’m among the last to board, and I’ve never been charged a fee for this.
Perhaps hobbies/careers that involve crafting a physical object have some built-in psychological advantage in generating feelings of fulfillment compared to other hobbies/careers?
Do you view moral agency as something binary, or do you think entities can exist on a continuous spectrum of how agentic they are? From this post and the preceding one, I’m not sure whether you have any category for “more agentic than a cat but less agentic than myself”.
I’m not sure “proximity” is the best word to describe the Good Samaritan’s message. I think “ability to help” would more centrally describe what it’s getting at, though of course prior to the creation of modern telecommunications, globalized financial systems, etc. “proximity” and “ability to help” were very strongly correlated.
I think for many philosophers, the claim “abstract objects are real” doesn’t depend on the use of mathematics to model physical reality. I think considering pure math is more illustrative of this point of view.
Andrew Wiles once described the experience of doing math research as follows:
“Perhaps I could best describe my experience of doing mathematics in terms of entering a dark mansion. You go into the first room and it’s dark, completely dark. You stumble around, bumping into the furniture. Gradually, you learn where each piece of furniture is. And finally, after six months or so, you find the light switch and turn it on. Suddenly, it’s all illuminated and you can see exactly where you were. Then you enter the next dark room...”
Since this is also what it feels like to study an unfamiliar part of physical reality, it’s intuitive to think that the mathematics you’re studying constitutes some reality that exists independently of human minds. Whether this intuition is actually correct is a rather different question…
farming and science and computers and rocket ships and everything else, none of which has any straightforward connection to tasks on the African savannah.
Farming does have a straightforward connection to techniques used by hunter-gatherers to gather plants more effectively. From page 66 of “Against the Grain: A Deep History of the Earliest States” by James C. Scott:
… hunters and gatherers, as we have seen, have long been sculpting the landscape: encouraging plants that will bear food and raw materials later, burning to create fodder and attract game, weeding natural stands of desirable grains and tubers. Except for the act of harrowing and sowing, they perform all the other operations for wild stands of cereals that farmers do for their crops.
I don’t think “inject as much heroin as possible” is an accurate description of the value function of heroin addicts. I think opioid addicts are often just acting on the value function “I want to feel generally good emotionally and physically, and I don’t want to feel really unwell”. But once you’re addicted to opioids, the only way to achieve this value in the short term is to take more opioids.
My thinking on this is influenced by the recent Kurzgesagt video about fentanyl: https://www.youtube.com/watch?v=m6KnVTYtSc0.
If you were to start yearning for children, you would either (a) be able to resist the yearning, or (b) be unable to resist the yearning and choose to have kids. In case (a), resisting might be emotionally unpleasant, but I don’t think it’s worth being “terrified of”. In case (b), you might be misunderstanding your terminal goals, or else the approximation that all of the squishy stuff that comprises your brain can be modeled as a rational agent pursuing some set of terminal goals breaks down.
In what sense does the Society of Friends require more commitment than Unitarian Universalist or humanist churches do?
Neat!
In the linked example, I don’t think “expert consensus” and “groupthink” are two ways to describe the same underlying reality with different emotional valences. Groupthink describes a particular sociological model of how a consensus was reached.
What about the physical process of offering somebody a menu of lotteries consisting only of options that they have seen before? Or a 2-step physical process where first one tells somebody about some set of options, and then presents a menu of lotteries taken only from that set? I can’t think of any example where a rational-seeming preference function doesn’t obey IIA in one of these information-leakage-free physical processes.
I think you’re interpreting the word “offer” too literally in the statement of IIA.
Also, any agent who chooses B among {A,B,C} would also choose B among the options {A,B} if presented with them after seeing C. So I think a more illuminating description of your thought experiment is that an agent with limited knowledge has a preference function over lotteries which depends on its knowledge, and that having the linguistic experience of being “offered” a lottery can give the agent more knowledge. So the preference function can change over time as the agent acquires new evidence, but the preference function at any fixed time obeys IIA.
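A toy sketch of this point (the option names and utility numbers are made up): fix the agent’s knowledge state after it has seen C, model its preference at that state as a utility function, and let choice be argmax over the menu. Then removing C from the menu cannot change the winner.

```python
# At a fixed knowledge state, model the preference function as a utility
# function and choice as argmax over the offered menu.
def choose(menu, utility):
    return max(menu, key=utility)

# Hypothetical post-"seeing C" utilities; only their ordering matters.
utility_after_seeing_C = {"A": 1.0, "B": 3.0, "C": 2.0}.get

assert choose({"A", "B", "C"}, utility_after_seeing_C) == "B"
assert choose({"A", "B"}, utility_after_seeing_C) == "B"  # IIA holds at this fixed state
```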
To clarify the last part of your comment, the ratio of the probability of the Great Filter being in front of us to the probability of the Great Filter being behind tool-using intelligent animals should be unchanged by this update, right?
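To make the ratio claim concrete (all numbers below are made up): if the new evidence has the same likelihood under both placements of the Filter, Bayes’ rule rescales both posteriors by the same factor, so their ratio is untouched.

```python
# Illustrative priors for "Filter ahead of us" vs. "Filter behind
# tool-using intelligent animals"; the specific values don't matter.
p_front, p_behind = 0.3, 0.2
likelihood = 0.6   # assumed equal: P(evidence | front) = P(evidence | behind)

post_front = p_front * likelihood    # unnormalized posteriors; the common
post_behind = p_behind * likelihood  # normalizer cancels in the ratio

assert abs(post_front / post_behind - p_front / p_behind) < 1e-12
```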
Sorry, I didn’t mean to say that there were exactly three options. Rather, I meant to say there’s at least one additional option outside of the dichotomy you set up in your original short form. Though perhaps I misinterpreted what you said when I read it as dichotomous.