I want the world to be saved, and am willing to take action to make that happen, so long as the actions I take don’t make me feel like a victim. I tend to feel like a victim if I take an action that reduces my standard of living, if I contribute to a lost cause, or in a few other scenarios that don’t seem relevant here.
I presently feel that SIAI is blocking itself by apparently believing that solving the FAI problem is blocked on any or all of the following:
Newcomb’s problem
Dealing with people who have non-instrumental concerns about what is done with simulations of them, beyond saying “don’t care about that”
Caring what happens to causally disconnected areas of space-time that resemble the here-and-now
Caring about ethical systems that have unbounded utility, beyond saying “don’t make an ethical system with unbounded utility”
Probably a few other pieces of obscure philosophy I can’t recall right now or don’t know about yet
I have not yet posted coherent arguments against these things. I plan to spend some time on that for a while, since the people here claim to be responsive to good arguments. I don’t really expect changing SIAI’s position on enough of these issues to be politically possible, so I expect to fail and then focus my efforts elsewhere. Hmm, I suppose I should try to find and link up with any non-SIAI people on Giles’ list above at that point.
I suppose the general lesson to learn from this is that in at least one case, lack of agreement on a general approach is blocking cooperation.
In the past I’ve made the opposite argument to SIAI, which seemed to be well received, that there were more philosophical problems that need to be solved for FAI than they may have realized. Obviously it would be great news if that turns out not to be the case, so I would be really interested to hear your arguments.
I thought SIAI consensus was that Newcomb’s problem was solved, and not a block at all?
It’s not so much that they feel they have to deal with those people as that they are those people.
(Haven’t read further yet.)