When you argue that the expected utility of action X is negative, you won't make much headway by proposing an unlikely and gerrymandered set of circumstances such that, conditional on their being true, the conditional expectation is negative.
Maybe God will strike us down just for thinking about building a Friendly AI.
Now what kind of civilized rational conversation is that?
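To put the same point in symbols (notation mine, with C standing for the proposed set of circumstances), the law of total expectation gives a minimal sketch of why this move fails:

$$ \mathbb{E}[U(X)] = P(C)\,\mathbb{E}[U(X) \mid C] + P(\neg C)\,\mathbb{E}[U(X) \mid \neg C] $$

Whether X is net-negative depends on the whole weighted sum on the left, not on the sign of one conditional term attached to a low-probability C.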