Perhaps hobbies/careers that involve crafting a physical object have some built-in psychological advantage in generating feelings of fulfillment compared to other hobbies/careers?
Pretentious Penguin
Do you view moral agency as something binary, or do you think entities can exist on a continuous spectrum of how agentic they are? From this post and the preceding one, I’m not sure whether you have any category for “more agentic than a cat but less agentic than myself”.
I’m not sure “proximity” is the best word to describe the Good Samaritan’s message. I think “ability to help” more centrally describes what it’s getting at, though of course prior to the creation of modern telecommunications, globalized financial systems, etc., “proximity” and “ability to help” were very strongly correlated.
I think for many philosophers, the claim “abstract objects are real” doesn’t depend on the use of mathematics to model physical reality. I think considering pure math is more illustrative of this point of view.
Andrew Wiles once described the experience of doing math research as:
“Perhaps I could best describe my experience of doing mathematics in terms of entering a dark mansion. You go into the first room and it’s dark, completely dark. You stumble around, bumping into the furniture. Gradually, you learn where each piece of furniture is. And finally, after six months or so, you find the light switch and turn it on. Suddenly, it’s all illuminated and you can see exactly where you were. Then you enter the next dark room...”
Since this is also what it feels like to study an unfamiliar part of physical reality, it’s intuitive to think that the mathematics you’re studying constitutes some reality that exists independently of human minds. Whether this intuition is actually correct is a rather different question…
farming and science and computers and rocket ships and everything else, none of which has any straightforward connection to tasks on the African savannah.
Farming does have a straightforward connection to techniques used by hunter-gatherers to gather plants more effectively. From page 66 of “Against the Grain: A Deep History of the Earliest States” by James C. Scott:
… hunters and gatherers, as we have seen, have long been sculpting the landscape: encouraging plants that will bear food and raw materials later, burning to create fodder and attract game, weeding natural stands of desirable grains and tubers. Except for the act of harrowing and sowing, they perform all the other operations for wild stands of cereals that farmers do for their crops.
I don’t think “inject as much heroin as possible” is an accurate description of the value function of heroin addicts. I think opioid addicts are often just acting on the value function “I want to feel generally good emotionally and physically, and don’t want to feel really unwell”. But once you’re addicted to opioids, the only way to achieve this value in the short term is to take more opioids.
My thinking on this is influenced by the recent Kurzgesagt video about fentanyl: https://www.youtube.com/watch?v=m6KnVTYtSc0.
If you were to start yearning for children, you would either (a) be able to resist the yearning, or (b) be unable to resist the yearning and choose to have kids. In case (a), resisting might be emotionally unpleasant, but I don’t think it’s worth being “terrified of”. In case (b), you might be misunderstanding your terminal goals, or else the approximation that all of the squishy stuff that comprises your brain can be modeled as a rational agent pursuing some set of terminal goals breaks down.
In what sense does the Society of Friends require more commitment than Unitarian Universalist or humanist churches do?
Neat!
In the linked example, I don’t think “expert consensus” and “groupthink” are two ways to describe the same underlying reality with different emotional valences. Groupthink describes a particular sociological model of how a consensus was reached.
What about the physical process of offering somebody a menu of lotteries consisting only of options that they have seen before? Or a 2-step physical process where first one tells somebody about some set of options, and then presents a menu of lotteries taken only from that set? I can’t think of any example where a rational-seeming preference function doesn’t obey IIA in one of these information-leakage-free physical processes.
I think you’re interpreting the word “offer” too literally in the statement of IIA.
Also, any agent who chooses B among {A,B,C} would also choose B among the options {A,B} if presented with them after seeing C. So I think a more illuminating description of your thought experiment is that an agent with limited knowledge has a preference function over lotteries which depends on its knowledge, and that having the linguistic experience of being “offered” a lottery can give the agent more knowledge. So the preference function can change over time as the agent acquires new evidence, but the preference function at any fixed time obeys IIA.
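As a toy sketch of this (all option names and utility numbers below are made up for illustration): an agent whose evaluation of an option depends on which options it has been told about. Presenting {A, B} to a fresh agent yields A, but an agent that has already seen C chooses B from either menu, so the choice function at any fixed knowledge state obeys IIA.

```python
# Toy sketch (hypothetical numbers): the agent's utility for an option depends
# on the set of options it knows about. Choices shift only when knowledge grows.

def utility(option, knowledge):
    base = {"A": 2.0, "B": 1.5, "C": 1.0}[option]
    # Having seen option C changes how the agent values B.
    if "C" in knowledge and option == "B":
        base += 1.0
    return base

def choose(menu, knowledge):
    # Being offered a menu teaches the agent about everything on it.
    knowledge = knowledge | set(menu)
    return max(menu, key=lambda o: utility(o, knowledge)), knowledge

naive_choice, _ = choose(["A", "B"], set())          # "A": C never seen
informed_choice, k = choose(["A", "B", "C"], set())  # "B": seeing C changed things
repeat_choice, _ = choose(["A", "B"], k)             # still "B": knowledge persists
print(naive_choice, informed_choice, repeat_choice)  # A B B
```

The apparent IIA violation (choosing A from {A, B} but B from {A, B, C}) disappears once we compare choices made at the same knowledge state.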
To clarify the last part of your comment, the ratio of the probability of the Great Filter being in front of us to the probability of the Great Filter being behind tool-using intelligent animals should be unchanged by this update, right?
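For concreteness, here is a minimal Bayesian sketch with made-up priors and likelihoods, showing that evidence with equal likelihood under two hypotheses rescales both posteriors by the same factor and so leaves their ratio unchanged:

```python
# Toy Bayesian update (all numbers hypothetical). Evidence that bears only on
# "other" hypotheses has equal likelihood under the two hypotheses of interest,
# so their odds ratio survives the update.

def posterior(priors, likelihoods):
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnorm.values())
    return {h: u / z for h, u in unnorm.items()}

priors = {"filter_ahead": 0.2, "filter_behind_tool_users": 0.1, "other": 0.7}
likelihoods = {"filter_ahead": 0.5, "filter_behind_tool_users": 0.5, "other": 0.2}

post = posterior(priors, likelihoods)
ratio_before = priors["filter_ahead"] / priors["filter_behind_tool_users"]
ratio_after = post["filter_ahead"] / post["filter_behind_tool_users"]
print(ratio_before, ratio_after)  # both 2.0
```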
It should be noted that the psychologists and anthropologists in the above tables were not selected based on winning a Nobel prize, nor any other prize. On pages 51–52 of The Making of a Scientist, Roe writes:
For the psychologists the preliminary list was made up by me in consultation, separately, with Dr. E. G. Boring and Dr. David Shakow. We simply went over the membership list of the American Psychological Association and put down everyone we knew to be actively engaged in research and otherwise qualified. This preliminary list was then rated, in the usual fashion, by Dr. Boring, of Harvard University, [...]
and then lists a bunch of other professors involved in rating the list, and “the men who ranked at the top were selected, with some adjustment so as to include representatives of different sorts of psychology.”
(Incidentally, I wonder whether Professor Boring’s lectures lived up to his name.)
Nobel prize winners (especially those in math and sciences) tend to have IQs significantly above the population average.
There is no Nobel prize in math. And the word “especially” would imply that there exists data on the IQs of Nobel laureates in literature and peace which shows a weaker trend than the trend for sciences laureates; has anybody ever managed to convince a bunch of literature Nobel laureates to take IQ tests? I can’t find anything by Googling, and I’m skeptical.
To be clear, the general claim that people who win prestigious STEM awards have above-average IQs is obviously true.
The title of this post was effectively clickbait for me, since my primary thought in clicking on it was “I wonder what claim the post will make about the foundations of quantum mechanics”, but then I discovered this topic is relegated to a follow-up post. Maybe “Chance is in the map, not the (classical) territory” or “Chance is in the map, not the territory: Part 1” would’ve been better titles?
So would it be accurate to say that a preference over lotteries (where each lottery involves only real-valued probabilities) satisfies the axioms of the VNM theorem (except for the Archimedean property) if and only if that preference is equivalent to maximizing the expectation value of a surreal-valued utility function?
Re the parent example, I agree that changing in an expectable way is problematic to rational optimizing, but I think “what kind of agent am I happy about being?” is a distinct question from “what kinds of agents exist among minds in the world?”.
If you’re on macOS and still want caps lock to be accessible for the rare occasions when you want it, you can use Karabiner-Elements to swap the caps lock key and the escape key.
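The swap is usually set up in Karabiner-Elements’ GUI under Simple Modifications; the resulting entries in `~/.config/karabiner/karabiner.json` look roughly like the fragment below (the exact schema may vary between Karabiner versions, e.g. older versions use an object rather than an array for `"to"`):

```json
{
  "profiles": [
    {
      "name": "Default profile",
      "simple_modifications": [
        { "from": { "key_code": "caps_lock" }, "to": [{ "key_code": "escape" }] },
        { "from": { "key_code": "escape" }, "to": [{ "key_code": "caps_lock" }] }
      ]
    }
  ]
}
```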
What is the precise statement for being able to use surreal numbers when we remove the Archimedean axiom? The surreal version of the VNM representation theorem in “Surreal Decisions” (https://arxiv.org/abs/2111.00862) seems to still have a surreal version of the Archimedean axiom.
Re the parent example, I was imagining that the 2-priority utility function for the parent only applied after they already had children, and that their utility function before having children is able to trade off between not having children, having some who live, and having some who die. Anecdotally it seems a lot of new parents experience diachronic inconsistency in their preferences.
It seems to me that the “continuity/Archimedean” property is the least intuitively necessary of the four axioms of the VNM utility theorem. One way of specifying preferences over lotteries that still obeys the other three axioms is assigning to each possible world two real numbers $u_1$ and $u_2$ instead of one, where $u_1$ is a “top priority” and $u_2$ is a “secondary priority”. If two lotteries have different values of $E[u_1]$, the one with greater $E[u_1]$ is ranked higher, and $E[u_2]$ is used as a tie-breaker. One possible real-world example (with integer-valued $u_1$ for deterministic outcomes) would be a parent whose top priority is minimizing the number of their children who die within the parent’s lifetime, with the rest of their utility function being secondary.
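A minimal sketch of this lexicographic rule in Python (the outcome names and utility numbers are made up for illustration):

```python
from fractions import Fraction

# Two-priority ("lexicographic") preferences: each outcome gets a pair
# (u1, u2); lotteries are compared by E[u1] first, E[u2] as tie-breaker.

def expected_utilities(lottery, utilities):
    """lottery: list of (probability, outcome) pairs -> (E[u1], E[u2])."""
    eu1 = sum(p * utilities[o][0] for p, o in lottery)
    eu2 = sum(p * utilities[o][1] for p, o in lottery)
    return (eu1, eu2)

def prefers(a, b, utilities):
    # Python's tuple comparison is exactly the lexicographic order we want.
    return expected_utilities(a, utilities) > expected_utilities(b, utilities)

# Parent example: u1 = -(number of children who die), u2 = everything else.
utilities = {
    "child_dies": (-1, 0),
    "status_quo": (0, 0),
    "windfall": (0, 100),
}

safe = [(Fraction(1), "status_quo")]
eps = Fraction(1, 10**9)
risky = [(eps, "child_dies"), (1 - eps, "windfall")]

# No secondary gain can compensate for any nonzero top-priority risk --
# this is where the continuity/Archimedean axiom fails.
print(prefers(safe, risky, utilities))  # True
```

Since $E[u_2]$ only matters at exact ties in $E[u_1]$, no probability mixture of the risky lottery with anything else can make it competitive with the safe one, which is precisely the failure of continuity.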
I’d be interested in whether there exist any preferences over lotteries quantifying our intuitive understanding of risk aversion while still obeying the other three axioms of the VNM theorem. I spent about an hour trying to construct an example without success, and suspect it might be impossible.
Which airlines make you pay when they force you to check your bag due to running out of overhead bin space? I frequently have to check my bag intended for the overhead bin due to being among the last to board, and I’ve never been charged a fee for this.