Maybe it’s because people perceive me as an Optimist, and therefore feel my points must be combated at any cost.
Maybe people really just naturally and unbiasedly disagree this much, though I doubt it.
But the end result is that I have given up on communicating with most folk who have been in the community longer than, say, 3 years.
Not saying that it’s fun or even obviously net-positive for all participants, but I think combative communication is better than no communication, as far as truth-seeking goes.
To be frank, I think a lot of the case for AI accident risk comes down to a set of subtle word games.
Sure, but what if what’s left is risky enough? Maybe utility maximization is a bad model of future AI (perhaps because it’s hard to predict technology that doesn’t exist yet) - but what’s the alternative? Isn’t labelling some empirical graph that ignores warning signs “awesomeness” and extrapolating it more of a word game?