The trouble is that tradition is undocumented code, so you aren’t sure what is safe to change when circumstances change.
Seems like a bad comparison, since, as an atheist, you don’t accept the Bible’s truth, so the things the preacher is saying are basically spam from your perspective. There’s also no need to feel self-conscious or defend your good-person-ness to this preacher, as you don’t accept the premises he’s arguing from.
Yes, and the preacher doesn’t ask me about my premises before attempting to impose their values on me. Even if I share some or all of the preacher’s premises, they’re trying to force a strong conclusion about my moral character upon me and put my reputation at stake without giving me a chance to critically examine the logic with which that conclusion was derived or defend my reputation. Seems like a rather coercive conversation, doesn’t it?
Does it seem to you that the preacher is engaging with me in good faith? Are they curious, or have they already written the bottom line?
I think I see a motte and bailey around what it means to be a good person. Notice at the beginning of the post, we’ve got statements like
Anita reassured Susan that her comments were not directed at her personally
...
they spent the duration of the meeting consoling Susan, reassuring her that she was not at fault
And by the end, we’ve got statements like
it’s quite hard to actually stop participating in racism… In societies with structural racism, ethical behavior requires skillfully and consciously reducing harm
...
almost every person’s behavior is morally depraved a lot of the time
...
What if there are bad things that are your fault?
...
accept that you are irredeemably evil
Maybe Susan knows on some level that her colleagues aren’t being completely honest when they claim to think she’s not at fault. Maybe she correctly reads conversational subtext suggesting she is morally depraved, bad things are her fault, and she is irredeemably evil. This could explain why she reacts so negatively.
The parallel you draw to Calvinist doctrine is interesting. Presumably most of us would not take a Christian preacher very seriously if they told us we were morally depraved. As an atheist, when a preacher on the street tells me this, I see it as an unwelcome attempt to impose their values on me. I don’t tell the preacher that I accept the fact that I’m irredeemably evil, because I don’t want to let the preacher browbeat me into changing the way I live my life.
Now suppose you were accosted by such a preacher, and when you responded negatively, they proclaimed that your choice to defend yourself (by telling them about times when you worked to make the world a better place, say) was further evidence of your depravity. The preacher brings out their Bible and points to a verse which they interpret to mean “it is a sin to defend yourself against street preachers”. How do you react?
Seems like a bit of a Catch-22, eh? The preacher has created a situation where, if I accept their conversational frame, I’m considered a terrible person unless I do whatever they say. See numbers 13, 18, and 21 on this list.
Maybe you’re right, I haven’t seen it used much in practice. Feel free to replace “Something like Nonviolent Communication” with “Advice for getting along with people” in that sentence.
Agreed. Also, remember that conversations are not always about facts. Oftentimes they are about the relative status of the participants. Something like Nonviolent Communication might seem like tone policing, but through a status lens, it could be seen as a practice where you stop struggling for higher status with your conversation partner and instead treat them compassionately as an equal.
Just saw this Facebook group for getting papers. There’s also this. And https://libkey.io/
Interesting post. I think it might be useful to examine the intuition that hierarchy is undesirable, though.
It seems like you might want to separate out equality in terms of power from equality in terms of welfare. Most of the benefits from hierarchy seem to come from power inequality (let the people who are the most knowledgeable and the most competent make important decisions). Most of the costs come in the form of welfare inequality (decision-makers co-opting resources for themselves). (The best argument against this frame would probably be something about the average person having self-actualization, freedom, and mastery of their destiny. This could be a sense in which power equality and welfare equality are the same thing.)
Robin Hanson’s “vote values, bet beliefs” proposal is an intriguing way to get the benefits of inequality without the costs. You have the decisions being made by wealthy speculators, who have a strong financial incentive to leave the prediction market if they are less knowledgeable and competent than the people they’re betting against. But all those brains get used in the service of achieving values that everyone in society gets equal input on. So you have a lot of power inequality but a lot of welfare equality. Maybe you could even address the self-actualization point by making that one of the values that people vote on somehow. (Also, it’s not clear to me that voting on values rather than politicians actually represents a loss of freedom to master your destiny, etc.)
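To make the basic shape of the mechanism concrete, here’s a toy sketch in Python. All of it (the welfare weights, the policies, the forecast numbers) is made up for illustration and is not Hanson’s actual proposal in any detail: voters aggregate a welfare measure, speculators forecast how each policy would score on that measure, and the policy with the best forecast gets enacted.

```python
# Toy "vote values, bet beliefs" sketch (illustrative only).
# Voters define the welfare measure; bettors forecast how each
# policy would score on that measure; the best-forecast policy wins.

from statistics import mean

# Step 1: "vote values" -- citizens vote on how to weigh outcomes
# into a single welfare score (weights are invented).
value_votes = [
    {"health": 0.5, "income": 0.3, "leisure": 0.2},
    {"health": 0.4, "income": 0.4, "leisure": 0.2},
    {"health": 0.6, "income": 0.2, "leisure": 0.2},
]
weights = {k: mean(v[k] for v in value_votes) for k in value_votes[0]}

# Step 2: "bet beliefs" -- speculators' market forecasts of each
# outcome conditional on each policy (numbers are invented).
market_forecasts = {
    "policy_A": {"health": 0.70, "income": 0.60, "leisure": 0.50},
    "policy_B": {"health": 0.55, "income": 0.75, "leisure": 0.60},
}

# Step 3: enact whichever policy the market expects to maximize
# the voted-on welfare measure.
def expected_welfare(forecast):
    return sum(weights[k] * forecast[k] for k in weights)

winner = max(market_forecasts, key=lambda p: expected_welfare(market_forecasts[p]))
print(winner, {p: round(expected_welfare(f), 3) for p, f in market_forecasts.items()})
```

The part doing the work is that traders who forecast worse than their counterparties lose money and exit, which is the power-inequality/welfare-equality split described above.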
This is also interesting.
If you’re willing to go back more than 70 years, in the US at least, the math suggests prepping is a good strategy:
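To gesture at the kind of math I mean, here’s a back-of-the-envelope expected-value sketch. Every number in it (the annual probability, the cost, the benefit) is a placeholder for illustration, not a historical estimate:

```python
# Back-of-the-envelope expected value of prepping (all numbers
# are placeholders, not historical estimates).

annual_disruption_prob = 0.01   # assumed chance per year of an event where preps matter
horizon_years = 70              # the ">70 years" window mentioned above
prep_cost = 2_000               # assumed one-time cost of basic preps (USD)
benefit_if_needed = 50_000      # assumed value of preps in a disruption (avoided losses, USD)

# Probability of at least one qualifying event over the horizon.
p_any_event = 1 - (1 - annual_disruption_prob) ** horizon_years

expected_benefit = p_any_event * benefit_if_needed
print(f"P(at least one event) ≈ {p_any_event:.2f}")
print(f"Expected benefit ≈ ${expected_benefit:,.0f} vs. cost ${prep_cost:,}")
# With these placeholder numbers, the expected benefit (~$25k) exceeds the cost.
```

The conclusion is obviously only as good as the assumed probability and payoff, which is the real point of contention.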
+1 for this. It’s tremendously refreshing to see someone engage the opposing position on a controversial issue in good faith. I hope you don’t regret writing it.
Would your model predict that if we surveyed fans of *50 Shades of Grey*, they have experienced traumatic abuse at a rate higher than the baseline? This seems like a surprising but testable prediction.
Personally, I think your story might be accurate for your peer group, but that your peer group is also highly non-representative of the population at large. There is very wide variation in female sexual preferences. For example, the stupidslutsclub subreddit was created for women to celebrate their enjoyment of degrading and often dubiously consensual sex. The conversation there looks nothing like the conversation about sex in the rationalist community, because they are communities for very different kinds of people. When I read the stupidslutsclub subreddit, I don’t get the impression that the female posters are engaging in the sort of self-harm you describe. They’re just women with some weird kinks.
Most PUA advice is optimized for picking up neurotypical women who go clubbing every weekend. Women in the rationalist community are far more likely to spend Friday evening reading Tumblr than getting turnt.
We shouldn’t be surprised if there are a lot of mating behaviors that women in one group enjoy and women in the other group find disturbing.

If I hire someone to commit a murder, I’m guilty of something bad. By creating an incentive for a bad thing to happen, I have caused a bad thing to happen, and am therefore guilty. By the same logic, we could argue that if a woman systematically rejects non-abusive men in favor of abusive men, she is creating an incentive for men to be abusive, and is therefore guilty. (I’m not sure whether I agree with this argument. It’s not immediately compatible with the “different strokes for different folks” point from previous paragraphs. But if feminists made it, I would find it more plausible that their desire is to stop a dynamic they consider harmful rather than to engage in anti-male sectarianism.)
Another point: Your post doesn’t account for replaceability effects. If a woman is systematically rejecting non-abusive men in favor of abusive men, and a guy presents himself as someone who’s abusive enough to be attractive to her but less abusive than the average guy she would date, then you could argue that she gains utility by dating him. And if she has a kid, we’d probably prefer she have the kid with someone who’s pretending to be a jerk rather than someone who actually is a jerk, since the kid only inherits jerk genes in the latter case. (BTW, I think the “systematically rejecting non-abusive men in favor of abusive men” scenario is an extreme case that is probably quite rare/nonexistent in the population, but it’s simpler to think about.)
Once you account for replaceability, it could be that the most effective intervention for decreasing abuse is actually to help non-abusive guys be more attractive. If non-abusive guys are more attractive, some women who would have dated abusive guys will date them instead, so the volume of abuse will decrease. This could involve, for example, advice for how to be dominant in a sexy but non-abusive way.
This is sad.
Some of his old tweets are pretty dark:
I haven’t talked to anyone face to face since 2015
https://twitter.com/Grognor/status/868640995856068609
I just want to remind everyone that this thread exists.
The sentiment is the same, but mine has an actual justification behind it. Care to attack the justification?
Happy thought of the day: If the simulation argument is correct, and you find that you are not a p-zombie, it means some supercivilization thinks you’re doing something important/interesting enough to be worth the resources it takes to simulate you.
“I think therefore I am a player character.”
I just saw this link, maybe you have thoughts?
(Let’s move subsequent discussion over there)
This is a zero-sum game where every person working on x-risk is a technical person explicitly not working on advancing technologies (like AI) that will increase standards of living and help solve our global problems. If someone chooses to work on AI x-risk, they are probably qualified to work directly on the hard problems of AI itself. By not working on AI they are incrementally slowing down AI efforts, and therefore delaying access to technology that could save the world.
I wouldn’t worry much about this, because the financial incentives to advance AI are much stronger than the ones to work on AI safety. AI safety work is just a blip compared to AI advancement work.
So here’s a utilitarian calculation for you: assume that AGI will allow us to conquer disease and natural death, by virtue of the fact that true AGI removes scarcity of intellectual resources to work on these problems. It’s a bit of a naïve view, but I’m asking you to assume it only for the sake of argument. Then every moment someone is working on x-risk problems instead, they are potentially delaying the advent of true AGI by some number of minutes, hours, or days. Multiply that by the number of people who die unnecessary deaths every day—hundreds of thousands—and that is the amount of blood on the hands of someone who is capable but chooses not to work on making the technology widely available as quickly as possible. Existential risk can only be justified as a more pressing concern if it can be reasonably demonstrated to have a higher probability of causing more deaths than inaction.
You should really read Astronomical Waste before you try to make this kind of quasi-utilitarian argument about x-risk :)
Show me the code. Demonstrate for me (in a toy but realistic environment) an AI/proto-AGI that turns evil, built using the architectures that are the current focus of research, and give me reasonable technical justification for why we should expect the same properties in larger, more complex environments.
What do you think of this example?
https://www.facebook.com/jesse.newton.37/posts/776177951574
(I’m sure there are better examples to be found, I’m just trying to figure out what you are looking for.)
I don’t like the precautionary principle either, but reversed stupidity is not intelligence.
“Do you think there’s a reason why we should privilege your position” was probably a bad question to ask, because people can argue forever about which side “should” have the burden of proof without actually making progress toward resolving the disagreement. A statement like
The burden of proof therefore belongs to those who propose restrictive measures.
...is not one that we can demonstrate to be true or false through some experiment or deductive argument. When a bunch of transhumanists get together to talk about the precautionary principle, it’s unsurprising that they’ll come up with something that embeds the opposite set of values.
BTW, what specific restrictive measures do you see the AI safety folks proposing? From Scott Alexander’s *AI Researchers on AI Risk*:
The “skeptic” position seems to be that, although we should probably get a couple of bright people to start working on preliminary aspects of the problem, we shouldn’t panic or start trying to ban AI research.
The “believers”, meanwhile, insist that although we shouldn’t panic or start trying to ban AI research, we should probably get a couple of bright people to start working on preliminary aspects of the problem.
(Control-f ‘controversy’ in the essay to get more thoughts along the same lines)
Like Max More, I’m a transhumanist. But I’m also a utilitarian. If you are too, maybe we can have a productive discussion where we work from utilitarianism as a shared premise.
As a utilitarian, I find Nick Bostrom’s argument for existential risk minimization pretty compelling. Do you have thoughts?
Note Bostrom doesn’t necessarily think we should be biased towards slow tech progress:
...instead of thinking about sustainability as is commonly known, as this static concept that has a stable state that we should try to approximate, where we use up no more resources than are regenerated by the natural environment, we need, I think, to think about sustainability in dynamical terms, where instead of reaching a state, we try to enter and stay on a trajectory that is indefinitely sustainable in the sense that we can continue to travel on that trajectory indefinitely and it leads in a good direction.
http://www.stafforini.com/blog/bostrom/
So speaking from a utilitarian perspective, I don’t see good reasons to have a strong pro-tech prior or a strong anti-tech prior. Tech has brought us both disease reduction and nuclear weapons.
Predicting the future is unsolved in the general case. Nevertheless, I agree with Max More that we should do the best we can, and in fact one of the most serious attempts I know of to forecast AI has come out of the AI safety community (http://aiimpacts.org/). Do you know of any comparable effort being made by people unconcerned with AI safety?
You describe the arguments of AI safety advocates as being handwavey and lacking rigor. Do you believe you have arguments for why AI safety should not be a concern that are more rigorous? If not, do you think there’s a reason why we should privilege your position?
Most of the arguments I’ve heard from you are arguments that AI is going to progress slowly. I haven’t heard arguments from AI safety advocates that AI will progress quickly, so I’m not sure there is a disagreement. I’ve heard arguments that AI may progress quickly, but a few anecdotes about instances of slow progress strike me as a pretty handwavey/non-rigorous response. I could just as easily provide anecdotes of unexpectedly quick progress (e.g. AIs able to beat humans at Go arrived ~10 years ahead of schedule). Note that the claim you are going for is a substantially stronger one than the one I hear from AI safety folks: you’re saying that we can be confident that things will play out in one particular way, and AI safety people say that we should be prepared for the possibility that things play out in a variety of different ways.
FWIW, I’m pretty sure Bostrom’s thinking on AI predates Less Wrong by quite a bit.
A lot more folks will be using it soon...
?
Nice post. I think one thing which can be described in this framework is a kind of “distributed circular reasoning”. The argument is made that “we know sharing evidence for Blue positions causes harmful effects due to Green positions A, B, and C”, but the widespread acceptance of Green positions A, B, and C itself rests on the fact that evidence for Green positions is shared much more readily than evidence for Blue positions.