Hi all,
I’ve been following EY and LW for about four years now, though I’m fairly new to posting. I started out as a “republican” in elementary school, then turned into a “libertarian” in high school because I didn’t care for many conservative positions. Then an “objectivist” in college, because I didn’t care for the fact that libertarianism only extended to politics and not ethics. Then I became frustrated with the Objectivist community and its inability to adapt to the real world, so I became an “all the people I’ve met who self-identify as one of these labels have turned out to be really obnoxious, so I really don’t want to muddy discussions by using a label”-ist. It wasn’t until recently that I discovered Rationalism; so far it has been both the most accurate label and the most complete system I’ve found.
My end-game is to end death (and, if entropically possible, reverse it), which is a pretty big practical problem. As such, I don’t have much interest in many of the ethical questions, because more often than not my answer is: “If we can end or reverse death, it doesn’t matter.” Short-term, my goal is to become rich enough to retire fairly early, with a significant amount of money that can be used to fund various worthy causes and allow me to continue this path full-time. I’m probably 75% of the way there. When I’m not trying to build wealth, most of my free time is spent tinkering with various AI algorithms, exploring number theory, or building prototypes of various gadgets (my latest is a hard drive that stores data using energy rather than matter; never mind that it can only store about 16 bytes).
I find this method intellectually dangerous.
We do not live in the LCPW, and constantly considering ethical problems as if we do is a mind-killer. It trains the brain to stop looking for creative solutions to intractable real world problems and instead focus on rigid abstract solutions to conceptual problems.
I agree that there is a modicum of value in considering the LCPW, just as there’s a modicum of value in eating a pound of butter for dinner. It’s just that there are far better ways to spend one’s time. The proper response to “Well, what about the LCPW?” is “How do you know we are in the LCPW?” I think there is far more value in a conversation that explores our assumptions about a difficult problem rather than one that indulges them.
Q: Consider the Sick Villager problem. How do we know that the patients won’t die due to transplant rejection?
A: Oh, well, Omega says so.
Q: Okay. So how do we know Omega is right?
A: Because Omega is omniscient.
Q: If Omega is omniscient, why can’t it tell us how to grow working organs without the need for human sacrifice?
A: Because there are limits to how much it knows.
Q: Okay, so if I knew in advance that Omega is omniscient but has these limitations, why on earth am I working in a village helping ten villagers instead of working on advancing Omega to the point where it doesn’t have those limitations? (And if I don’t know this in advance, why would I suddenly start believing some random computer that claims it is Omega?)
A: I don’t know; because it’s the LCPW.
That conversation yields a lot more intellectual value; it trains you to think creatively and explore all possible solutions, rather than to devise a single heuristic that is only applicable in a 5d corner case. As I indicated above, this can actually be dangerous: novice rationalists may feel compelled to apply that narrow heuristic to situations where a better, more creative solution is available.