It took me a while to fully understand your point in this post. I think adding an obviously wrong example that’s identical in structure to “All men are mortal. Socrates is a man. Therefore Socrates is mortal.” will help. My example is “All chickens are mortal. Socrates is a chicken. Therefore, Socrates is mortal.” It helps show that the original example given in the post is wrong.
Bohaska
[Question] Why did Russia invade Ukraine?
We need more epistemic spot checks like these for important claims made in other posts.
Nitpick: Italy as a headline appears twice
If you don’t want to eat your own tasty pastries due to future regrets, I’m willing to volunteer to help you eat them for free.
[Question] High school advice
The three historical figures I can think of who built giant institutions lasting thousands of years are Paul the Apostle, Mohammad and Qin Shihuang.
I wouldn’t exactly classify Qin Shihuang as being in that vein. While the idea of the Mandate of Heaven and the idea that China should be unified under one dynasty were firmly established by him (almost all rebellions in Chinese history were about overthrowing the emperor and replacing him with a new one, and only rarely about changing the structure of government), the Qin dynasty collapsed under his son. Qin is not exactly known for being a long-lasting dynasty.
I believe Confucius is a much better example. His philosophy and teachings have been passed down all the way to today’s China, and have held their importance for thousands of years.
I wonder, what percentage of users vote based on post quality, and what percentage vote based on the viewpoint of the post?
What was the result of your request for further communication outside of LessWrong?
Would you mind writing a follow-up post about how you joined the rationalist/EA community? I’m interested to see how your journey progressed 🙂
What would it mean for an AI to be right or wrong about morality? Isn’t morality defined by us? How would you define morality?
How would we be able to verify such a claim? How would we investigate this? What specific help do you need from us?
How would you define objective morality? What would make it objective? If it did exist, how would you possibly be able to find it?
Isn’t morality a human construct? Eliezer’s point is that morality is defined by us, not by an algorithm or a rule or something similar. If it were defined by something else, it wouldn’t be our morality.
Eliezer used “universally compelling argument” to illustrate a hypothetical argument that could persuade anything, even a paper clip maximiser. He didn’t use it to refer to your definition of the word.
You can say that the fact it doesn’t persuade a paper clip maximiser is irrelevant, but that has no bearing on the definition of the word as commonly used on LessWrong.
So, most people see sleep as something that’s obviously beneficial, but this post was great at sparking conversation about this topic, and questioning that assumption about whether sleep is good. It’s well-researched and addresses many of the pro-sleep studies and points regarding the issue.
I’d like to see people do more studies on the effects of low sleep on other diseases and activities. There are many good objections in the comments, such as the increased risk of Alzheimer’s, driving while sleepy, and how the analogy between sleep deprivation and fasting may be misguided.
There was a good experiment presented here, where Andrew Vlahos replied
> I’m a tutor, and I’ve noticed that when students get less sleep they make many more minor mistakes (like dropping a negative sign) and don’t learn as well. This effect is strong enough that for a couple of students I started guessing how much sleep they got the last couple days at the end of sessions, asked them, and was almost always right.
and guzey replied with a proposed experiment
> As an experiment—you can ask a couple of your students to take a coffee heading to you when they are underslept and see if they continue to make mistakes and learn poorly (in which case it’s the lack of sleep per se likely causing problems) or not (in which case it’s sleepiness)

Hopefully someone runs this experiment in the future.
I was initially a bit confused about the difference between an AI based on shard theory and one based on an optimiser and a grader, until I realized that the former has an incentive to make its evaluation of outcomes as accurate as possible, while the latter doesn’t. For example, the diamond-shard agent wouldn’t try to fool its own evaluation, because that would conflict with its goal of having more diamonds, whereas the grader-optimiser wouldn’t care.
I don’t think so, I also only noticed it on the frontpage today.
Would more people donate to charity if they could do so in one click? Maybe...
Actually, China has gotten vaccines and boosters into a lot of arms; its vaccination rate is 85%.