Are there any similar versions of this post on LW which express the same message, but without the patronising tone of Valentine? Would that be valuable?
Bohaska
I wonder, what percentage of users vote based on post quality, and what percentage vote based on the viewpoint of the post?
The three historical figures I can think of who built giant institutions lasting thousands of years are Paul the Apostle, Mohammad and Qin Shihuang.
I wouldn't exactly classify Qin Shihuang in that vein. While he firmly established the idea of the Mandate of Heaven and the idea that China should be unified under one dynasty (almost all rebellions in Chinese history were about overthrowing the emperor and replacing him with a new one, and only rarely about changing the structure of government), the Qin dynasty collapsed under his son. Qin is not exactly known for being a long-lasting state.
I believe Confucius is a much better example. His philosophy and teachings have been passed down all the way to today’s China, and have held their importance for thousands of years.
We need more epistemic spot checks like these for important claims made in other posts
After reading this and your dialogue with lsusr, it seems that Dark Arts arguments are logically consistent, and that the most effective way to rebut them is not to challenge them directly on the issue.
jimmy and madasario in the comments asked for a way to detect stupid arguments. My current answer to that is “take the argument to its logical conclusion, check whether the argument’s conclusion accurately predicts reality, and if it doesn’t, it’s probably wrong”
For example, you mentioned an argument that we need to send U.S. troops to the Arctic because Russia has hypersonic missiles capable of a first strike on the US, whose range is too short to reach the US from the Russian mainland but long enough to reach it from the Arctic.
If this really were true, we would see it treated as a national emergency, with the US taking swift action to stop Russia from placing missiles in the Arctic. We don’t see this.
Now, for some arguments (e.g. AI risk, cryonics), the truth is more complicated than this, but it’s a good heuristic for telling whether you need to investigate an argument more thoroughly or not.
Ethical worth may not be finite, but resources are finite. If we value ants more, then we should give more resources to ants, which means there are fewer resources to give to humans.
From your comments on how you value reducing ant suffering, I think your framework regarding ants seems to be “don’t harm them, but you don’t need to help them either”. So basically reducing suffering but not maximising happiness.
Utilitarianism says that you should also value the happiness of all beings with subjective experience, and that we should try to make them happier, which leads to the question of how to do this if we value animals. I’m a bit confused: how can you value not intentionally making them suffer, yet not also conclude that we should give resources to them to make them happier?
The reason why it’s considered good to double the ant population is not necessarily because it’ll be good for the existing ants, it’s because it’ll be good for the new ants created. Likewise, the reason why it’ll be good to create copies of yourself is not because you will be happy, but because your copies will be happy, which is also a good thing.
Yes, under utilitarianism, making more ants is only good if ants have subjective experience, because utilitarianism only values subjective experience. Though, if your model of the world says that ant suffering is bad, doesn’t that imply you believe ants have subjective experience?
I think you just got the wrong audience. People assume you’re referring to effective altruism charities and aid. The average LessWrong reader already believes that traditional aid is ineffective, so this post is mostly old info. Your criticisms of aid sound a bit ignorant because people pattern-match your post to criticism of charities like GiveDirectly, even though studies have shown GiveDirectly has quite a good cost-benefit ratio.
Your post is accurate, but redundant to EAs.
Also, slightly unrelated, but what do you think about EA charities? Have you looked into them? Do you find them better than traditional charities?
Would more people donate to charity if they could do so in one click? Maybe...
I don’t think so, I also only noticed it on the frontpage today.
What was the result of your request for further communication outside of LessWrong?
Actually, China has gotten vaccines and boosters into a lot of arms; its vaccination rate is 85%.
Was the Renaissance caused by the new elite class, the merchants, focusing more on pleasure and having fun, compared to the lords, who focused more on status and power?
Hmm, is there a collection of historical terrorist attacks related to AI?
We do agree that suffering is bad, and that if a new clone of you would experience more suffering than happiness, creating it would be bad. But does the suffering really outweigh the happiness they’ll gain?
You have experienced suffering in your life. But still, do you prefer to have lived, or do you prefer to not have been born? Your copy will probably give the same answer.
(If your answer is genuinely “I wish I wasn’t born”, then I can understand not wanting to have copies of yourself)
I do believe your main point is correct, just that most people here already know that.
Why would our CoffeeFetcher-1000 stay in the building and continue to fetch us coffee? Why wouldn’t it instead leave, after (for example) writing a letter of resignation pointing out that there are starving children in Africa who don’t even have clean drinking water, let alone coffee, so it’s going to hitchhike/earn its way there, where it can do the most good [or substitute whatever other activity it could do that would do the most good for humanity: fetching coffee at a hospital, maybe]?
Why can’t you just build an AI whose goal is to fetch its owner’s coffee, and not to maximize the good it’ll do?
I was initially a bit confused about the difference between an AI based on shard theory and one based on an optimizer and a grader, until I realized that the former has an incentive to make its evaluation of results as accurate as possible, while the latter doesn’t. For instance, a diamond-shard agent wouldn’t try to fool its own evaluations, because that would conflict with its goal of having more diamonds, whereas the grader-optimizer wouldn’t care.
So, most people see sleep as something that’s obviously beneficial, but this post was great at sparking conversation about this topic, and questioning that assumption about whether sleep is good. It’s well-researched and addresses many of the pro-sleep studies and points regarding the issue.
I’d like to see people do more studies on the effects of low sleep on other diseases or activities. There are many good objections in the comments, such as increased risk of Alzheimer’s, driving while sleepy, and how the analogy of sleep deprivation to fasting may be misguided.
A good experiment was proposed here. Andrew Vlahos commented:
> I’m a tutor, and I’ve noticed that when students get less sleep they make many more minor mistakes (like dropping a negative sign) and don’t learn as well. This effect is strong enough that for a couple of students I started guessing how much sleep they got the last couple days at the end of sessions, asked them, and was almost always right.
and guzey replied with a proposed experiment:
> As an experiment—you can ask a couple of your students to take a coffee before heading to you when they are underslept and see if they continue to make mistakes and learn poorly (in which case it’s the lack of sleep per se likely causing problems) or not (in which case it’s sleepiness)

Hopefully someone does bother to do it in the future.
But Manifold adds 20 mana of liquidity per new trader, so the market becomes more inelastic over time; the liquidity doesn’t stay at 50 mana.
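To see why more liquidity makes prices more inelastic, here is a minimal sketch using a simple constant-product market maker. This is illustrative only: Manifold’s actual mechanism (Maniswap) differs in its details, and the function name and starting probability of 0.5 are my own assumptions.

```python
# Toy constant-product market maker: the product yes * no = k is held fixed.
# Illustrative sketch only -- Manifold's real AMM (Maniswap) differs in detail.

def price_after_bet(liquidity: float, bet: float) -> float:
    """Implied YES probability after a YES bet of `bet` mana.

    Starts from a balanced pool (p = 0.5) with `liquidity` mana on each side.
    The bettor's mana goes into the NO pool, and the YES pool shrinks so
    that yes * no stays constant.
    """
    yes, no = liquidity, liquidity
    k = yes * no
    no += bet               # bettor's mana enters the NO side
    yes = k / no            # YES shares shrink to keep the product constant
    return no / (yes + no)  # implied probability of YES

# The same 10-mana bet moves a thin market far more than a deep one.
thin = price_after_bet(50.0, 10.0)   # ~0.59
deep = price_after_bet(500.0, 10.0)  # ~0.51
print(f"thin market (50 liquidity):  p = {thin:.3f}")
print(f"deep market (500 liquidity): p = {deep:.3f}")
```

With 50 mana of liquidity, a 10-mana bet pushes the price from 0.50 to about 0.59; with 500 mana, the same bet only moves it to about 0.51. So as Manifold adds liquidity with each new trader, each marginal bet moves the price less.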