Or you could just take more, so that the nervousness is swamped by the general handshakery...
Technologos
Seth appears to be contrasting a “job” with things like “being an entrepreneur in business for oneself,” so perhaps the first of your options.
I think much of the problem here comes from something of an equivocation on the meaning of “economic disaster.” A country can post high and growing GDP numbers without benefiting its citizens as much as a country with weaker numbers; the linked paper notes that
“real per capita private consumption was lower than straight GDP per capita figures suggest because of very high investment rates and high military expenditures, and the quality of goods that that consumption expenditure could bring was even lower still.”
Communism is good at maintaining top-line growth in an economy because it can simply mandate spending. In much the same way as US government spending can directly add to GDP growth (even if incurring substantial debt), the Soviet Union could make massive military expenditures even while running factories that produced goods based not on consumer desires but on the state’s beliefs about those desires or needs.
In short, communism was not an economic disaster in that it effectively industrialized a great many nations and brought consistent top-line growth. It was an economic disaster in that state power allowed or created widespread famines and poor production of consumer goods.
My understanding is that one primary issue with frequentism is that it can be so easily abused/manipulated to support preferred conclusions, and I suspect that’s the subject of the article. Frequentism may not have “caused the problem,” per se, but perhaps it enabled it?
And in particular, there’s good reason to believe that brains are still evolving at a decent pace, where it looks like cell mechanisms largely settled a long while back.
Oh, I meant that saying it was going to torture you if you didn’t release it could have been exactly what it needed to say to get you to release it.
Perhaps it does—and already said it...
What you say is true while the Koran and the Bible are referents, but when A and B become “Mohammed is the last prophet, who brought the full truth of God’s will” and “Jesus was a literal incarnation of God,” (the central beliefs of the religions that hold the respective books sacred) then James’ logic holds.
I realize how arrogant it must seem for young, uncredentialled (not even a Bachelor’s!) me to conclude that brilliant professional philosophers who have devoted their entire lives to studying this topic are simply confused. But, disturbing as it may be to say … that’s how it really looks.
Perhaps the fact that they have devoted their lives to a topic suggests that they have a vested interest in making it appear not to be nonsense. Cognitive dissonance can be tricky even for the pros.
What if the problem was reframed such that nobody ever found out about the decision and thereby that their estimates of risk remained unchanged?
I presented a reason why intuitions treat the scenarios differently, and why the intuitions are correct in doing so. That reason is consistent with the problem as stated. Assumption of risk most certainly is a factor, and a justifiable one.
It is certainly possible that there is some underlying utilitarian rationale being used. Reframing the problem as I suggested above might provide something of a test of the reason you provided, if an imperfect one (can we really ignore intuitions on command?).
I have a different interpretation of the LCPW here, though. The LCPW is supposed to be the one that isolates the moral quantity of interest—in this case, the decision to push or not, or to switch tracks—and is specifically designed to exclude answers that consider factors (realistic or not) that sidestep the issue.
I’d say the LCPW is one in which nobody will ever hear about the decision, and thus in which any ancillary effects are neutralized.
buying life insurance
For what it’s worth, I’ve heard people initially had many of the same hang-ups about life insurance, saying that they didn’t want to gamble on death. Salespeople got around that by emphasizing that the contracts would protect the family in the event of the breadwinner’s death, making the purchase less of a selfish thing.
I wonder if cryo needs a similar marketing parallel. “Don’t you want to see your parents again?”
Could you supply a (rough) probability derivation for your concerns about dystopian futures?
I suspect the reason people aren’t bringing those possibilities up is that, through a variety of influences (in particular the standard Less Wrong understanding of FAI derived from the Sequences), LWers assign a fairly high conditional probability Pr(life after cryo will be fun | anybody can and bothers to nanotechnologically reconstruct my brain), along with at least a modest probability of that condition actually occurring.
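The arithmetic behind that suspicion is just the product rule for probabilities. A minimal sketch with purely illustrative numbers (the values and variable names are assumptions for the example, not anyone’s actual estimates):

```python
# Purely illustrative numbers; both values are assumptions for the sketch.
p_revival = 0.10              # Pr(anybody can and bothers to reconstruct my brain)
p_fun_given_revival = 0.90    # Pr(life after cryo will be fun | revival happens)

# Product rule: Pr(fun life after cryo) = Pr(fun | revival) * Pr(revival)
p_fun = p_fun_given_revival * p_revival
print(p_fun)
```

Even a high conditional probability of a fun future yields only a modest absolute probability when the condition itself is uncertain, which is consistent with treating cryonics as worthwhile without expecting it to work.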
Does anyone really expect that this population would not respond to its incentives to avoid more danger? Anecdotes aside: do you expect them to join the military with the same frequency, to be firemen with the same frequency, to be doctors administering vaccinations in jungles with the same frequency?
Agreed—indeed, I suspect that one of the first steps to fundamentally altering the priorities of society may be the invention of methods to materially prolong life, such that it really does become an unspeakable tragedy to lose somebody permanently.
I was the lead developer of an AGI that is scheduled to hit start in three weeks. I quit when I saw that the ‘Friendliness’ intended is actually a dystopia and my protests were suppressed. I have just cancelled my cryonics membership and the reason your cryonic revival is dependent on killing me is that I am planning to sabotage the AI.
Is it weird that my first reaction is to ask her specific questions about the Sequences to test the likelihood of that statement’s veracity?
Your opponent must not see (consciously or subconsciously) your rhetoric as an attempt to gain status at zir expense.
To quote Daniele Vare: “Diplomacy is the art of letting someone have your way.”
Agreed, and I suspect that certainty and abrasive attributes are also less problematic if truth is not being sought after.
This would be entirely true if instead of utiles you had said dollars or other resources. As it is, it is false by definition: if two choices have the same expected utility (expected value of the utility function) then the chooser is indifferent between them. You are taking utility as an argument in something like a meta-utility function, which is an interesting discussion to have (which utility function we might want to have) but not the same as standard decision theory.
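The dollars-versus-utiles distinction can be made concrete. A minimal sketch, assuming a hypothetical log-utility agent (the function names are mine, not standard decision-theory code):

```python
import math

# Hypothetical agent with u(w) = ln(w), w in dollars (an assumption for the sketch).
def u(w):
    return math.log(w)

def expected_utility(lottery):
    """Expected utility of a lottery given as a list of (probability, wealth) pairs."""
    return sum(p * u(w) for p, w in lottery)

# Two lotteries with equal expected *dollars* (100) but different risk:
safe  = [(1.0, 100.0)]
risky = [(0.5, 50.0), (0.5, 150.0)]

print(expected_utility(safe))   # ln(100), about 4.61
print(expected_utility(risky))  # 0.5*ln(50) + 0.5*ln(150), about 4.46
# Equal expected dollars, unequal expected utility: this agent prefers `safe`.
# By contrast, two lotteries with equal expected *utility* are, by definition,
# ones the agent is indifferent between; there is no further risk premium on utiles.
```

Risk aversion over dollars is already baked into the concavity of u, so re-applying it to utiles double-counts it, which is the point of the comment.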
I think the uncomfortable part is that bill’s (and my) experience suggests that people are even more risk-averse than logarithmic functions would indicate.
I’d suggest that any consistent candidate (prospect theory notwithstanding) for a human utility function is somewhere between log(x) and log(log(x))… If I were given a 50-50 gamble between squaring my wealth and taking its square root, I would opt for the gamble.
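That squaring gamble is a quick test of where a utility function sits, since a log-utility agent accepts it while a log-log-utility agent is exactly indifferent. A sketch, assuming a hypothetical wealth level:

```python
import math

w = 1000.0  # hypothetical current wealth; the specific value is an assumption

# Under u(w) = ln(w), the 50-50 gamble between w**2 and sqrt(w) has
# E[u] = 0.5*ln(w**2) + 0.5*ln(sqrt(w)) = 1.25*ln(w) > ln(w) for any w > 1,
# so a log-utility agent takes the gamble.
eu_gamble = 0.5 * math.log(w ** 2) + 0.5 * math.log(math.sqrt(w))
eu_status_quo = math.log(w)
print(eu_gamble > eu_status_quo)

# Under u(w) = ln(ln(w)) the same gamble is exactly neutral:
# ln(ln(w**2)) = ln(2) + ln(ln(w)) and ln(ln(sqrt(w))) = ln(ln(w)) - ln(2),
# so the 50-50 average is ln(ln(w)): indifference. That is one sense in which
# log(x) and log(log(x)) bracket the risk attitude at issue.
eu2_gamble = 0.5 * math.log(math.log(w ** 2)) + 0.5 * math.log(math.log(math.sqrt(w)))
eu2_status_quo = math.log(math.log(w))
print(abs(eu2_gamble - eu2_status_quo) < 1e-9)
```

So someone who accepts the gamble is at most log-risk-averse, while someone more risk-averse than log(log(x)) would refuse it.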
VNM utility is a necessary consequence of its axioms but doesn’t entail a unique utility function; as such, the ability to prevent Dutch books derives more from VNM’s assumption of a fixed total ordering of outcomes than from anything else.