What Vassar is saying sounds to me like a justification of Pascal’s Wager: arguing that some gods have more measure than others, and that we can therefore rationally decide to believe in a certain god and live accordingly.
That is like saying that a biased coin does not have a probability of 1/2 of coming up heads, and that we can therefore maximize our payoff by betting on the side that is more likely to land face-up. That would be true if we had any information beyond the fact that the coin is biased. But if all we reliably know is that it is biased, it makes no sense to deviate from the 1/2 probability of a fair coin.
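The coin argument can be sketched numerically. This is a minimal illustration, not part of the original discussion: if ignorance about the direction of the bias is modeled as a symmetric prior, the predictive probability of heads collapses back to 1/2, so neither side offers an edge.

```python
import random

# Hypothetical model of the argument above: the coin is biased by some
# amount b, but the direction of the bias is unknown. Symmetric ignorance
# means heads comes up with probability 0.5 + b or 0.5 - b, each equally
# likely, so the predictive probability of heads is exactly 1/2.

def predictive_p_heads(b: float) -> float:
    """Average P(heads) over the two equally likely bias directions."""
    return 0.5 * (0.5 + b) + 0.5 * (0.5 - b)  # algebraically 0.5 for any b

def simulate(b: float, trials: int = 100_000, seed: int = 0) -> float:
    """Monte Carlo check: redraw the bias direction each trial, then flip."""
    rng = random.Random(seed)
    heads = 0
    for _ in range(trials):
        p = 0.5 + b if rng.random() < 0.5 else 0.5 - b
        heads += rng.random() < p
    return heads / trials

print(predictive_p_heads(0.25))  # 0.5
print(simulate(0.25))            # ≈ 0.5
```

The point of the sketch is only that symmetric ignorance about the direction of a bias leaves the betting odds exactly where a fair coin would.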
And I don’t think it is clear, at this point, that we are justified in assuming more than that there might be risks from AI. Claiming that some actions we can take with respect to those risks are superior to others is like claiming that the coin is biased while being unable to determine the direction of the bias. By acting on the claim that doing something is better than doing nothing, we may well end up making things worse, just as we would by unconditionally assigning a higher probability to one side of a coin, of which we know nothing except that it is biased, in a coin-tossing tournament.
The only sensible option seems to be to wait for more information.
Your posts highlight fundamental problems that I have as well. Especially this and this comment concisely describe the issues.
I have no answers and I don’t know how other people deal with it. Personally I forget about those problems frequently and act as if I can actually calculate what to do. Other times I just do what I want based on naive introspection.
And I don’t think it is clear, at this point, that we are justified in assuming more than that there might be risks from AI. Claiming that some actions we can take with respect to those risks are superior to others is like claiming that the coin is biased while being unable to determine the direction of the bias. By acting on the claim that doing something is better than doing nothing, we may well end up making things worse, just as we would by unconditionally assigning a higher probability to one side of a coin, of which we know nothing except that it is biased, in a coin-tossing tournament.
This is a problem—though it probably shouldn’t stop us from trying.
The only sensible option seems to be to wait for more information.
Players can try to improve their positions and attempt to gain knowledge and power. That itself might cause problems—but it seems likely to beat thumb twiddling.
I think Wei_Dai’s reply does trump that.
This is one of The Big Three Problems I came to LW hoping to find a solution for, but have mainly noticed that nobody wants to talk about it. Oh well.
Now I am curious about the other two.
1. How do you judge what you should (value-judgmentally) value?
2. How do you deal with uncertainty about the future (unpredictable chains of causality)? (what your above post was about)
3. What’s the right thing to do in life?
Here are some of my previous posts on the topics.