The survey has been taken by me.
MTGandP
You can’t distinguish your group by doing things that are rational and believing things that are true. If you want to set yourself apart from other people you have to do things that are arbitrary and believe things that are false.
Done. I think a lot of these questions are really fascinating, including user-submitted questions. I’m especially interested to see if we can do any better at avoiding anchoring than the general public.
I think it’s fairly clear that a farm chicken’s life is well below that threshold. If I had the choice between losing consciousness for an hour or spending an hour as a chicken on a factory farm, I would definitely choose the former.
Ninja Edit: I think a lot of people have poor intuitions when comparing life to non-life because our brains are wired to strongly shy away from non-life. That’s why the example I gave above used temporary loss of consciousness rather than death. Even if you don’t buy the above example, I think it’s possible to see that factory-farmed life is worse than death. This article discusses how doctors—the people most familiar with medical treatment—frequently choose to die sooner rather than attempt to prolong their lives when they know they will suffer greatly in their last days. It seems that life on a factory farm would entail much more suffering than death by a common illness.
The question “How Long Since You Last Posted On LessWrong?” is ambiguous—I don’t know if posting includes comments or just top-level posts.
I know more about StarCraft than I do about AI, so I could be off base, but here’s my best attempt at an explanation:
As a human, you can understand that a factory gets in the way of a unit, and if you lift it, it will no longer be in the way. The AI doesn’t understand this. The AI learns by playing through scenarios millions of times and learning that on average, in scenarios like this one, it gets an advantage when it performs this action. The AI has a much easier time learning something like “I should make a marine” (which it perceives as a single action) than “I should place my buildings such that all my units can get out of my base”, which requires making a series of correct choices about where to place buildings when the conceivable space of building placement has thousands of options.
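To give a rough sense of the asymmetry described above, here is a back-of-the-envelope comparison (all numbers are hypothetical, not taken from AlphaStar): a single "train a unit" decision is one choice from a few dozen options, while a wall requires several coordinated placements drawn from thousands of candidate tiles.

```python
# Rough illustration with made-up numbers: compare the size of a single
# "train unit" decision with the space of building-placement combinations
# a wall requires.
from math import comb

unit_types = 30          # one-shot choice: pick a unit type to train
grid_cells = 50 * 50     # hypothetical candidate tiles for a building
wall_buildings = 4       # a wall needs several coordinated placements

single_action_space = unit_types
placement_space = comb(grid_cells, wall_buildings)  # unordered tile choices

print(single_action_space)   # 30
print(placement_space > 10**12)  # True: over a trillion combinations
```

Even with these toy numbers, the placement space is ten orders of magnitude larger than the unit-choice space, which is one way to see why credit assignment for "good building placement" is so much harder to learn from win/loss signals alone.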
You could see this more broadly in the Terran AI where it knows the general concept of putting buildings in front of its base (which it probably learned via imitation learning from watching human games), but it doesn’t actually understand why it should be doing that, so it does a bad job. For example, in this game, you can see that the AI has learned:
1. I should build supply depots in front of my base.
2. If I get attacked, I should raise the supply depots.
But it doesn’t actually understand the reasoning behind these two things, which is that raising the supply depots is supposed to prevent the enemy units from running into your base. So this results in a comical situation where the AI doesn’t actually have a proper wall, allowing the enemy units to run in, and then it raises the supply depots after they’ve already run in. In short, it learns what actions are correlated with winning games, but it doesn’t know why, so it doesn’t always use these actions in the right ways.
Why is this AI still able to beat strong players? I think the main reason is that it’s so good at making the right units at the right times without missing a beat. Unlike humans, it never forgets to build units or gets distracted. Because it’s so good at execution, it can afford to do dumb stuff like accidentally trapping its own units. I suspect that if you gave a pro player the chance to play against AlphaStar 100 times in a row, they would eventually figure out a way to trick the AI into making game-losing mistakes over and over. (Pro player TLO said that he practiced against AlphaStar many times while it was in development, but he didn’t say much about how the games went.)
I agree. In addition, I think people who claim that they will eat more meat after seeing a pamphlet or some other promotion for vegetarianism just feel some anger in the moment, but they’ll likely forget about it within an hour or so. I can’t see someone several weeks later saying to eirself, “I’d better eat extra meat today because of that pamphlet I read three weeks ago.”
You make a quick statement at the end about how Kurzweil does better than random chance. But I wonder how we’d assess that? I’d guess that, if he’s getting 50% correct or weakly correct, he’s doing better than random chance because many (most?) of his claims are far-fetched.
I’ve thought of a way to test this, although it will take another ten years:
Kurzweil makes a bunch of predictions about what will happen by 2023. Then you have a bunch of non-experts decide which of his predictions they agree with. After 10 years, we can measure how much better Kurzweil did than the non-experts.
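The scoring step of this test is straightforward once the outcomes are known. A minimal sketch, using entirely made-up predictions and outcomes: score each forecaster’s yes/no predictions against reality and compare the hit rates.

```python
# Hypothetical sketch of the proposed test. All data below is invented
# for illustration; these are not Kurzweil's actual predictions.

def accuracy(predictions, outcomes):
    """Fraction of yes/no predictions that matched what happened."""
    hits = sum(p == o for p, o in zip(predictions, outcomes))
    return hits / len(outcomes)

outcomes  = [1, 0, 1, 1, 0, 0, 1, 0]  # what actually happened by 2023
forecaster = [1, 0, 1, 1, 1, 0, 1, 1]  # the expert's predictions
laypeople  = [1, 1, 0, 1, 1, 0, 0, 0]  # majority vote of non-experts

print(accuracy(forecaster, outcomes))  # 0.75
print(accuracy(laypeople, outcomes))   # 0.5
```

The gap between the two accuracy scores is the "better than non-experts" measure; a fancier version would use probabilistic forecasts and a proper scoring rule like the Brier score, but the comparison logic is the same.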
I answered that I’m cis by default, but I would freak out if I woke up in a woman’s body.
I think it’s totally reasonable to consider that freaky for reasons other than that you now have to live as a woman. I think the spirit of the question was more, “If you were a woman but had the same personality, would you be okay with that?”
Most people do worse at calibration than they expect, but you can improve with practice. http://predictionbook.com/
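The basic check behind calibration practice can be sketched in a few lines (illustrative data only, not the PredictionBook API): group your predictions by stated confidence and compare that confidence with how often those predictions actually came true.

```python
# Minimal calibration check with made-up prediction records:
# each entry is (stated probability, did it come true?).
from collections import defaultdict

predictions = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),
    (0.6, True), (0.6, False), (0.6, False), (0.6, True),
]

buckets = defaultdict(list)
for prob, outcome in predictions:
    buckets[prob].append(outcome)

for prob in sorted(buckets):
    hits = buckets[prob]
    rate = sum(hits) / len(hits)
    print(f"said {prob:.0%}: right {rate:.0%} of the time")
```

If your 90% predictions come true only 75% of the time, you are overconfident at that level; repeated feedback like this is what lets practice improve calibration.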
Your points (1) and (2) seem like fully general counterarguments against any activity at all, other than the single most effective activity at any given time. I do agree with you that future suffering could potentially greatly outweigh present suffering, and I think it’s very important to try to prevent future suffering of non-human animals. However, it seems that one of the best ways to do that is to encourage others to care more for the welfare of non-human animals, i.e. become veg*ans.
Perhaps more importantly, it makes sense from a psychological perspective to become a veg*an if you care about non-human animals. It seems that if I ate meat, cognitive dissonance would make it much harder for me to make an effort to prevent non-human suffering on a broader scale.
(4): Although I see no way to falsify this belief, I also don’t see any reason to believe that it’s true. Furthermore, it runs counter to my intuitions. Are profoundly mentally disabled humans incapable of “true” suffering?
(5): Humans and non-human animals evolved in the same way, so it strikes me as highly implausible that humans would be capable of suffering while all non-humans would lack this capacity.
I think that calculation makes sense and the −36 number looks about right. I had actually done a similar calculation a while ago and came up with a similar number. I suppose my guess of −10,000 was too hasty.
It may actually be a good deal higher than 36 depending on how much suffering fish and shellfish go through. This is harder to say because I don’t understand the conditions in fish farms nearly as well as chicken farms.
What if 2 + 2 varies over something other than time that nonetheless correlates with time in our universe? Suppose 2 + 2 comes out to 4 the first 1 trillion times the operation is performed by humans, and to 5 on the 1 trillion and first time.
I suppose you could raise the same explanation: the definition of 2 + 2 makes no reference to how many times it has been applied. I believe the same can be said for any other reason you may give for why 2 + 2 might cease to equal 4.
GWWC in particular does not recommend any animal welfare charities, which makes me especially reluctant to donate to them or even support them at all. It seems much too specifically focused on global poverty. From the GWWC homepage:
Extreme poverty causes much of the world’s worst suffering, but when armed with the right information you can make an enormous difference.
This seems excessively limiting given that good animal welfare charities are orders of magnitude more efficient than even the best human charities, and it becomes especially concerning when we consider the poor meat-eater problem.
Effective Animal Activism is a meta-charity that evaluates animal welfare charities. They do not accept donations and instead recommend that you give directly to their top charities.
A different paper but in the same vein: Markets are efficient if and only if P = NP
In case anyone’s curious, here are the highest-grossing films, adjusted for inflation.
I think the point of public speaking classes isn’t to do networking, but to improve communication skills and therefore skill at networking.
Why do Objectivists so frequently believe that anthropogenic global warming is not real? (It appears to be the consensus opinion on the Objectivism forum.) This belief doesn’t seem to have anything to do with Objectivism, and Ayn Rand certainly never mentioned global warming.
Survey complete!