Check Yudkowsky’s other writings (especially fiction) for multiple detailed discussions of these topics.
The simplest way to get rid of social injustice is to get rid of society. Most people would think that not an acceptable cost?
Nitpick of your nitpick: he doesn’t propose that it would include using nuclear weapons, he proposes that it would include the will to use nuclear weapons. The point is not “let’s start throwing nukes around”, it’s “this threat is larger than nuclear war—so nukes should also be an option if nothing else will work”.
The thing is that in her world model, she did see the Holy Spirit. That’s how she understands “seeing the Holy Spirit”. It very much is a valid representation of her beliefs. Whether her model is correct (or even good) is a different matter. Christianity has a bunch of jargon, as any such group with a history of thought does (yesterday I had a somewhat confusing discussion before realising that the meaning of “world model” is not obvious). It seems fair to say that it’s worth first establishing that all phrases being used are understood in the same way by everyone, but that also requires noticing when a given phrase is non-obvious or even counter-intuitive.
The entity with the intention to deceive doesn’t have to be the same as the entity misrepresenting its (which?) beliefs, true. But in that case it’s better to say that “Christianity is acting in bad faith”, rather than “this Christian over here is acting in bad faith”. Saying that someone is acting in bad faith is a statement about their intentions, not their actions.
You seem to have a somewhat idiosyncratic interpretation of “lying” (or I do—might be a cultural thing). My understanding of lying is “saying something you know to be untrue”. Which sort of is intentionally misrepresenting what they believe? Whether they do it instrumentally or terminally is beside the point?
“Accidentally bad faithing” doesn’t make sense (hence the “I like people who obey the law (here meaning never committing a social faux pas)” example). If you misrepresent the truth—or even outright lie—but didn’t intend to, then that’s not bad faith. Bad faith is when you intentionally set out to deceive someone. The intentions here are important. It’s like the difference between manslaughter and cold-blooded planned murder. Both result in a corpse, but the latter set out to intentionally kill someone.
The Christian talking about the witness of the Holy Spirit is not setting out to deceive you. They truly believe in their position. It might not be true, it might leave you with a totally different understanding than they’re thinking of, they might even be aware that they’re using “witness” in a somewhat unusual manner (“open your heart to Jesus”...) etc., but it’s not explicitly intended to deceive you, and so is not in bad faith.
“Cynical” seems to often be used as an incorrect synonym for what bad faith is pointing at, e.g. “cynically mention the witness of the Holy Spirit as evidence”.
Why assume this is only words? AI systems can already generate images, video and sound, which gives you a lot more subconscious bandwidth (scents and tactile feedback are admittedly harder). There’s also the shortcut of finding a charismatic/convincing person and convincing them to help, which may or may not make things a lot easier.
It seems like being persuasive is mainly working out what the entity being persuaded wants to get and wants to avoid. Once you can work that out (basically—something like cognitive empathy, including how psychopaths work), you then just have to select arguments that suggest that they are more likely to get what they want (and less likely to get what they don’t want) if they agree with you.
If you can simulate another entity with high fidelity, then you can just run a bunch of different arguments by them and see which ones tend to work better. This transforms it into an optimisation problem. Or you can even do it the other way round and map which arguments work on which bins of people.
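As a toy sketch of the optimisation framing: everything below (the personas, the candidate arguments, and the crude scoring rule standing in for a high-fidelity simulator) is invented purely for illustration.

```python
# Toy sketch of "persuasion as optimisation": score each candidate
# argument against a crude model of what the listener wants and wants
# to avoid, then pick the highest-scoring one per persona. A real
# simulator would be far richer; here a keyword overlap stands in.

def score(persona, argument):
    """Higher when the argument promises the persona's wants and
    avoids touching on its fears."""
    words = set(argument.lower().split())
    return len(words & persona["wants"]) - len(words & persona["avoids"])

def best_argument(persona, arguments):
    """Run every argument past the simulated persona, keep the winner."""
    return max(arguments, key=lambda a: score(persona, a))

personas = {
    "security-minded": {"wants": {"safety", "stability"}, "avoids": {"risk"}},
    "status-minded": {"wants": {"prestige", "recognition"}, "avoids": {"embarrassment"}},
}

arguments = [
    "agreeing brings safety and stability",
    "agreeing brings prestige and recognition",
    "refusing carries risk and embarrassment",
]

# The "other way round": map each bin of people to the argument
# that works best on it.
mapping = {name: best_argument(p, arguments) for name, p in personas.items()}
```

The point of the sketch is just that once responses can be simulated cheaply, argument selection reduces to a search over candidates, per persona or per bin.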
You might need to add sources for this. I roll to disbelieve, and Claude seems to also think that while you can construct a narrative that might be technically true, it would be at best misleading.
“Here are 5 bad interactions with this person” can be interpreted as “we don’t get along well”. “Here are 5 different people who had bad interactions with this person” is “that person doesn’t get along well with others”. This post is showing that certain people have patterns of harming others, as opposed to just having harmed the author.
where almost everyone seemed like a sexual abuser
There is something to the whole “my boyfriend will punch you in the face if you try anything funny” threat. I’m glad I don’t have to worry about violence nowadays, but it’s much harder to introduce credible deterrence mechanisms which aren’t backed by imminent physical pain.
I would like the world to be saved and think being good and not being evil
There’s also the whole thing about having something worthwhile saving. Winning all battles but losing the war is a very sad way to end.
Which is simply not true, according to my experience. You can. And even do it immediately.
This is quite dependent on the person. I know people who can do so easily, and I know people who can’t at all. It might be something you can learn with practice, but until you can do it, you can’t.
Beware the typical mind fallacy.
I guess it’s mainly too long, and therefore unclear? There are at least 4 main points that you are addressing in one large comment, along with a bunch of smaller issues. Splitting it into multiple, targeted ones would make it easier to react to them—it would also make it easier for you to work out what people don’t like about it.
I think you’re making a good point, but it could be boiled down to 2-3 sentences
I’m not sure that modeling people as rational agents in this kind of situation is correct. I’d assume that for every 5 people who know, there are 5 who are certain they know but are incorrect, 5 who have no idea but sound authoritative and another 20 who heard something from someone and are pretty sure it was over there, maybe? It should sort itself out after a while, but depending on the circumstances the sooner you have accurate information, the better.
The ideal approach, of course, is to just ask Claude to come up with some example situations and then research where to go (with backups) - spend 10min on it once every now and then, to make sure you’re up to date, and just have the places marked somewhere.
I notice I’m confused now. Manifest Destiny makes sense in the context of this post—there’s something of value to be achieved, and there will be costs. I’m not sure if I agree with this, but it’s coherent. What I don’t understand is how egregores using people via their personal incentives (for lack of a better description) fits in? It would seem that people just being people and things happening is sort of the opposite (or at least orthogonal) to actively trying to make things better? Do you mean something about shaping incentives being the method of conquest? This seems obviously true (capitalism vs communism being a good example), but if so, then using colonialism as an example might be a bad choice, or at least would need more inference steps explained.
This seems unfair or at least simplified? The Mongols didn’t come close to clearing three continents, but that was a skill issue. In absolute numbers or geographical extent you can make the argument that Europe was very successful at expansion, but this isn’t a specifically European hobby—this is what humanity has been doing as far back as can be seen. Europe was very good at it because they had a decisive edge (guns and disease, mainly). Previous attempts stopped earlier for technological reasons (hard to hold an empire if it takes months to communicate with the provinces). Most of history is different cultures trying to do the same thing, with varying levels of success and brutality. The Yamnaya expansion had similar results, but without the smallpox, which suggests that if anything it was worse, because intentional.
To be clear, I’m not saying that colonialism was good. More something like “European colonialism was the largest-in-absolute-numbers instance of a recurring human pattern” or something? That most high culture is based on enormous suffering and exploitation? British colonialism at least pretended at trying to help the natives. They also stopped the slave trade at large cost—this doesn’t absolve them of anything, of course, but I can’t imagine e.g. the Aztecs even dreaming of such absurdities.
Worst recent, maybe. You can make a more generic statement about “wars of conquest and empire building” being the worst atrocity in human history, which would sort of include colonialism, but e.g. I’m pretty sure the Assyrians were a lot more atrocious than the United States. Or the Mongols for a more recent such group.
That being said, “nobody should defend it” is very harsh. Why shouldn’t they? You can show that colonialism was (is) bad, but not letting people try to vouch for it seems unfair? I’m pretty sure you have views which many people think no one should defend (pretty much everyone does, somewhere) - does that mean you should abandon them?
Seems bad to focus on optics rather than truth
Not really. You have to also take into account the goodness being fought for. Evolution doesn’t care either way. Might makes right and all that. From what I understand the OP is pointing more in the direction of an argument from consequences, where the outcome was good, and so the price was worth paying (not that the cost was good! That’s a different matter!). The colonizers had a vision (this part seems very shaky, as the “vision” was very different at different points in time), that vision was good, they fought to achieve it, the price was very high, but the results justify the cost.
It’s possible that the future AI that takes over will result in a better state than the current one (the whole glorious trans-humanist future and everything). In which case I can totally understand someone wanting to fight for that to occur. I can also totally understand the natives fighting to keep their current way of life, which while not perfect is not bad. I’d even go so far as to say that the OP might even support this. They’d be fighting for their vision of goodness.
Either way, the point is to work out what “goodness” is and fight for it, knowing full well that there will be bad/ugly/maybe evil things happening along the way. The ends do not justify the means. Allies should be held accountable. There will be bad apples. This doesn’t mean you stop fighting. You try to limit the damage. But there will be damages.
Light from your windows depends on time of day/year etc. It also assumes you’ll be looking at/for things in places where that light reaches. I doubt this scenario is likely, and if it happened you’d have bigger problems, but I’m guessing a cloud of volcanic ash would massively limit the available light?
A little headband type light can last for weeks if you’re careful. A phone will probably die after a day or two. Probably fine, but it limits your options. I don’t really use my phone for anything, so I’m biased.
Growing up we used to often get power cuts (e.g. the neighbors would steal the power lines for copper...). So we’d always have candles and matches in an easily accessible place. In summer this was mainly used for going to the toilet (often can have small or no windows) or into cellars/pantries. In winter this meant that you could still do things after 3pm.
A portable light is very useful if you have to fix things (like sinks, cupboards etc.), as those places tend to not have good lighting.
A knife can be used to cut things, which is the obvious usage, but it can also be used as a screwdriver, a level, to open cans, open bottles, pry things out, etc. If I had to choose one thing to have with me in an unspecified emergency, I’d want a sharp knife, as you can use it to bootstrap basic versions of most of the other tools.
Wait, I’m confused. What is the difference between not admitting to themselves and not knowing? Do you mean something like subconsciously knowing? Or maybe cognitive dissonance?