I have been lurking around LW for a little over a year. I found it indirectly through the Simulation Argument > Bostrom > AI > MIRI > LW. I am a graduate of Yale Law School, and have an undergraduate degree in Economics and International Studies focusing on NGO work. I also read a lot, but in something of a wandering path that I realize can and should be improved upon with the help, resources, and advice of LW.
I have spent the last few years living and working in developing countries around the world in various public interest roles, trying to find opportunities to do high-impact work. This was based on a vague and undertheorized consequentialism that I have substantially rethought after finding FHI/MIRI/EA/LW etc. Without knowing about the larger effective altruism movement (aside from vague familiarity with Singer, QALY cost-effectiveness comparisons between NGOs, etc.), I had been trying to do something like effective altruism on my own. I had some success with this, but a lot of it was just the luck of being in the right place at the right time. I think this work is important enough that I should be approaching it more systematically and strategically than I had been. In particular, I am spending a lot of time moving my altruism away from just the concrete present and toward thinking about “astronomical waste” and the potential importance of securing the future for humanity. This is somewhat difficult, as I have a lot of experiential “availability” from working on the ground in poor countries, which pulls on my biases, especially when a lot of abstraction is the only counterweight. However, as stated, I feel this is too important to do incorrectly, even if it means taming my intuitions and resisting the easily available answer.
I have also been spending a lot of time recently thinking about the second disjunct of the simulation argument. Unless I am making a fundamental mistake, it seems as though the second disjunct, by bringing human decision making (or our coherent extrapolated volition, etc.) into the process, indirectly entangles the probable metaphysical reality of our world with our own decision making. This is true as a sort of unfolding of evidence if you are a two-boxer, but it is potentially sort-of-causally true if you are a one-boxer. Meaning that if we clear the existential hurdle, this is seemingly the next thing standing between us and the likely truth of being in a simulation. I actually have a very short write-up on this which I will post in the discussion area when I have sufficient karma (2 points, so probably soon). I also have much longer notes on a lot of related stuff, which I might turn into posts in the future if, after my first short post, this is interesting to anyone.
I am a bit shy online, so I might not post much, but I am trying to get bolder as part of a self-improvement scheme, so we will see how it goes. Either way, I will be reading.
Thank you LW for existing, and providing such rigorous and engaging content, for free, as a community.