I concur with many other people that when you start off from a wide sample of aggregative consequentialist values and try to do the most good, you bump into AI pretty soon. As I told Stuart Russell a while ago to explain why a philosopher-anthropologist was auditing his course:
My PhD will likely be a book on altruism, and any respectable altruist these days is worried about AI at least 30% of his waking life.
That’s how I see it anyway. Most of the arguments for it are in “Superintelligence”; if you disagree with that, then you probably disagree with me.
It’s actually fairly common in EA circles by now to acknowledge AI as an issue. The disagreements tend to be more about whether there are useful things to be done about it, or whether there are specific nonprofits worth supporting. (GiveWell has a blog post in that direction.)
EA is an intensional movement.
http://effective-altruism.com/ea/j7/effective_altruism_as_an_intensional_movement/
Not particularly disagreeing, I just found it odd in comparison to other EA writings. Thanks for the clarification.