Oh please. There’s a difference between what makes a useful heuristic for you to decide what to spend time considering and what makes for a persuasive argument in a large debate where participants are willing to spend time hashing out specifics.
DH1. Ad Hominem. An ad hominem attack is not quite as weak as mere name-calling. It might actually carry some weight. For example, if a senator wrote an article saying senators’ salaries should be increased, one could respond:
Of course he would say that. He’s a senator.
This wouldn’t refute the author’s argument, but it may at least be relevant to the case. It’s still a very weak form of disagreement, though. If there’s something wrong with the senator’s argument, you should say what it is; and if there isn’t, what difference does it make that he’s a senator?
I have found it instrumentally very useful to try to factor out the belief-propagation impact of people with nothing clearly impressive to show.
If even widely read bloggers like EY don’t qualify to affect your opinions, it sounds as though you’re ignoring almost everyone.
No one is expecting you to adopt their priors… Just read and make arguments about ideas instead of people, if you’re trying to make an inference about ideas.
If even widely read bloggers like EY don’t qualify to affect your opinions, it sounds as though you’re ignoring almost everyone.
I think you discarded one of the conditionals. I read Bruce Schneier’s blog, or Paul Graham’s. Furthermore, this is not about disagreement with the notion of AI risk; it’s about keeping the data non-cherry-picked, or at least less cherry-picked.
http://paulgraham.com/disagree.html