Just this guy, you know?
I think you’re undervaluing the cultural expectations of availability and cooperation during core work hours. And the value to workers of contractual pay (for which employers demand contractual hours). You’re also forgetting the hidden-value alignment brought on by the expectation of a recurring long-term relationship. It’s hard to monitor most work in the short term, so having the engagements be longer-term makes it possible to adjust job and compensation based on years of output rather than the latest delivery.
There certainly is more work than many think which can effectively be done piecemeal. But there’s lots more than you seem to acknowledge that is pretty well optimized by current norms.
Watchmen was pretty good on this front. Worm (https://parahumans.wordpress.com/) is LONG, but great.
Should you punish people for wronging others, or for making the wrong call about wronging others?
This is a topic where the answer depends a whole lot on how generally you’re asking, and what your moral and decision framework is. The key ambiguity is “should”. “What should you do” has been an open question for millennia.
The obvious consequentialist answer is that you should do either, both, or neither, depending on circumstance and your expected net impact of your action. Other moral frameworks likely have different answers.
The signal I prefer to send on the topic, intended to encourage a mechanical, reductionist view of the universe, is that punishment should be automatic and non-judgemental, with as little speculation as possible about motive or reasoning. If it caused harm, the punishment should be proportional to the harm. Yes, this lets luck and well-intentioned mistakes control more than an omniscient god might prefer. But I don’t have one of those, so I prefer to eliminate the other human failings of bad judgement that come with humans making a punishment call putatively based on inference about reasoning or motivation, but actually mostly based on biases and guesses.

I’m OK with adding punishment for very high-risk behaviors that don’t happen to cause harm in the observed instance. I don’t have a theory for how to remove human bias from that part of judgement, but I also don’t trust people enough to do without it.
Beautiful. I love supervillain storylines where I root against the heroes because the writers haven’t done the math.
Realistic threat modeling takes into account severity and duration, not just probability distribution. “had to request outside supplies” describes normal life for most of us, not a situation to prep for.
The thing that’s hard to model is the wide-scale systemic fragility in the modern world. Collapse could easily go deeper than expected, and then there’s no “outside supplies” to be had. It’s very (very!) hard to predict the specific edges of that scenario that would let your individual preparation be effective.
unless there is a chance of punishment or being hurt by victim’s self-defense or a chance of better alternative interaction with given person.
There always is that chance. That’s mostly our disagreement. Using real-world illustrations (murder) for motivational models (utility) really needs to acknowledge the uncertainty and variability, which the vast majority of the time “adds up to normal”. There really aren’t that many murders among strangers. And there are a fair number of people who don’t value others very highly.
That’s my take as well. “estimating the probability” really means “calculating the plausibility based on this knowledge”.
I doubt it’s a difference in utility function (the value placed on a given state of the world), but in the distribution of predicted events. If you’re worried about a sudden event that disrupts everything for a few weeks to months, then surviving that with stored food and defense is what you plan for. If you’re worried about a complete collapse to pre-industrial capacity, then you think about seeds and skills to make rebuilding slightly faster (even if not in your lifetime, perhaps your grandkids’). What the second group forgets is that even if it’s a long-term collapse, there’s still 1-10 years of short-term violence and starving people trying to eat your seed stock. You may not need lots of guns, but you need lots of people you trust who have guns.
Having lots of potable water around is definitely a good start. That’s one of the reasons I argue against tankless water heaters—having 40-60 gallons of stored water is rarely a bad thing. A few weeks’ worth of food and medicine that doesn’t require electricity to prepare is also valuable in a very wide range of scenarios.
Haven’t looked, so maybe they solved this, but my primary concern with putting any of my money behind that kind of contract-validation is that I don’t understand why we’d believe the predicted evaluation matches the actual evaluation of the contract. Either the contract is objective and you don’t want juries at all, or the contract is subjective but the distribution of test juries is likely very different from the actual jury.
I often do stop at “hello”. I also sometimes use “what ho!” or other archaic/amusing variations. But most of the time some form of “howzit / how are you / how’s it going” is a lightweight conventional conversational handoff to let them know it’s their turn to speak.
In many conversations, a more direct/specific question would be offputting and unhelpful. It adds pressure to be clever or informative, and takes away options to remain brief and lightweight (“fine. you?”). I’d prefer most social contacts to start out with this negotiation—each participant has an opportunity to inject important/urgent topics, and only if both pass do you push harder for a non-urgent discussion topic.
Note, this is all rationalization, not rationality—I’ve had bad luck with trying to start conversations more quickly, and this is my reasoning for why that is. I’m sticking with simple, short, ambiguous/meaningless introductory phrases (mostly—there are LOTS of exceptions. Don’t be boring on an early date, for instance—your goal there is to seem interesting and find topics of mutual conversation quickly).
I’m not sure what our disagreement actually is—I agree with your summary of Ayn Rand, I agree that there are lots of ways to hurt people without stabbing. I’m not sure you’re claiming this, but I think that failure to help is selfish too, though I’m not sure it’s comparable with active harm.
It may be that I’m reacting badly to the use of “truly selfish”—I fear a motte-and-bailey argument is coming, where we define it loosely, and then categorize actions inconsistently as “truly selfish” only in extremes, but then try to define policy to cover far more things.
I think we’re agreed that the world contains a range of motivated behaviors, from sadistic psychopaths (who have NEGATIVE nonzero terms for others’ happiness) to saints (whose utility functions weight very heavily toward others’ happiness over their own). I don’t know if we agree that “second-order effects” very often dominate the observed behaviors over most of this range. I hope we agree that almost everyone changes their behavior to some extent based on visible incentives.
I still disagree with your post that a coefficient of 0 for you in someone’s mind implies murder for pocket change. And I disagree with the implication that murder for pocket change is impossible even if the coefficient is above 0 - circumstances matter more than innate utility function.

To the OP’s point, it’s hard to know how to accomplish “make people less selfish”, but “make the environment more conducive to positive-sum choices so selfish people take cooperative actions” is quite feasible.
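That last point can be made concrete with a toy model. This is a minimal sketch, not anyone’s actual decision theory—the payoff numbers and names are invented for illustration. It shows that an agent with a coefficient of exactly 0 on others’ welfare still cooperates whenever the environment makes cooperation the highest-payoff option:

```python
# Hedged sketch: a fully selfish agent (coefficient 0 on others' welfare)
# still cooperates when the environment makes cooperation pay.
# All payoff numbers below are invented for illustration.

def best_action(own_payoffs, other_coeff, others_payoffs):
    """Pick the action maximizing own payoff plus a weighted term
    for the other party's payoff."""
    return max(
        own_payoffs,
        key=lambda a: own_payoffs[a] + other_coeff * others_payoffs[a],
    )

# A lawless environment: defection (e.g. theft) nets more than trade.
lawless = {"cooperate": 5, "defect": 8}
# An environment with enforcement and repeat business: defection is costly.
policed = {"cooperate": 5, "defect": -20}
# What each action does to the other party (ignored when coefficient is 0).
others = {"cooperate": 5, "defect": -10}

# With a coefficient of 0, behavior flips with the environment alone:
assert best_action(lawless, 0.0, others) == "defect"
assert best_action(policed, 0.0, others) == "cooperate"
```

The coefficient only changes *which* environments produce cooperation; the environment is the lever policy can actually pull.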
I was somewhat unimpressed. It’s hard to take discussion of “mainstream economics” very seriously—there are a LOT of different parts of economics study, and this author doesn’t explain what theory or policy he’s concerned about.
To the extent that economics overlaps with politics, it certainly is subject to many of the same failings—oversimplification, pandering, and selective choice of theory as truth. I blame politics more than economics for those things.
You can’t hypothesize zeros and get anywhere. MANY MANY psychopaths exist, and very few of them find it more effective to murder people for spare change than to further their ends in other ways. They may not care about you, but your atoms are useful to them in their current configuration.
Judgement of evil follows the same pressures as evil itself. Selfishness feels different from sadism to you, at least in part because it’s easier to find cooperative paths with selfishness. And this question really does come down to “when should I cooperate vs defect”.
It doesn’t need to be linear (both partial correlation of desires and declining marginal desire are well-known), but the only alternative to aggregation is incoherency.
I think you’d be on solid ground if you argue that humans have incoherent values, and this is a fair step in that direction.
Might not even want to imply that it’s the main or only argument. Maybe “this particular argument is invalid”.
Do you have a link to that argument? I think Bayesian updates include either reducing a prior or increasing it, and then renormalizing all related probabilities. Many updatable observations take the form of replacing an estimate of future experience (I will observe sunshine tomorrow) with a 1 or 0 (I did or did not observe that, possibly not quite 0 or 1 if you want to account for hallucinations and imperfect memory).
Anthropic updates are either Bayesian or impossible. The underlying question remains “how does this experience differ from my probability estimate?” For Bayes or for Solomonoff, one has to answer “what has changed for my prediction? In what way am I surprised and have to change my calculation?”
I mostly agree, but the underlying difficulty is not technical implementation, but social (and legal) acceptance. It’s almost impossible to explain the topic to a layperson who’s worried about it but not very sophisticated. And it’s very hard, even for experts, to define “good” and “bad” uses of anonymized-but-segmented (by business/interest/demographics/etc) data.
I’m not sure you can separate reasons very cleanly, and I’m pretty sure you can’t believe individual declarations of their reasons for punishment, let alone groups. Punishment behaviors and mechanisms have evolved over a very very long time, and there is no simple causality to deconstruct.
also, very minor correction: “Prevention” is the correct word, without the extra “ta”.
To an individual human, death by AI (or by climate catastrophe) is worse than old age “natural” death only to the extent that it comes sooner, and perhaps in being more violent. To someone who cares about others, the large number of looming deaths is pretty bad. To someone who cares about the species, or who cares about quantity of sentient individuals, AI is likely to reduce total utility by quite a bit.
To someone who loves only abstract intelligence and quantifies by some metric I don’t quite get, AI may be just as good as (or better than) people.