This is rationalist evidence??
It says a lot of things. Let’s unpack.
You are wrong. That is, I think you are wrong. That is, I see value in saying you are wrong. This suggests I expect my argument to be convincing. But I’m not giving an argument. This is unconventional, and suggests I deliberately do not intend to give an argument. This would normally cause my point to be dismissed; thus I believe there to be an obvious reason why my point would stand up in the face of this absence of evidence. A likely candidate seems to be Special Authorial Foreknowledge, which I have cited before.
I said you were wrong in response to a detailed post by you. Thus, I have already accounted for your arguments and did not find them convincing. Having already brought to mind Special Authorial Foreknowledge, the lack of engagement with the question suggests that answering it would require citing Special Authorial Foreknowledge, which I do not want to do [and which you should not want me to do]. Nonetheless, I believe that revealing that your theory is incorrect due to conflict with SAF is not in itself spoilerful. [Also, none of this is unambiguous but I believe it to be bolstered to prominence by reading “Nope.”]
In conclusion, Draco is obviously Voldemort.
You might review the concept of Bayesian evidence. A lot of things happen to be evidence.
Good point. The above one-word replies are weak evidence in favor of my hypothesis.
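As a rough sketch of the standard likelihood-ratio reading of “evidence” being gestured at here (my gloss, not anything spelled out in the thread): an observation E counts as evidence for a hypothesis H exactly when it is more probable if H is true than if H is false, and the strength of that evidence is the factor by which it shifts the odds.

\[
\frac{P(H \mid E)}{P(\lnot H \mid E)} \;=\; \frac{P(E \mid H)}{P(E \mid \lnot H)} \cdot \frac{P(H)}{P(\lnot H)}
\]

On this reading, a one-word “Nope.” is evidence only to the extent that its author is more likely to write it when the theory is wrong than when it is right; “weak evidence” corresponds to a likelihood ratio only slightly different from 1, so the posterior odds barely move.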
Yep.
OK
The simplest explanation for choosing a career in existential risk reduction is that it makes not building a humanity-saving superintelligent AI a virtue instead of a failure. Not that there’s anything wrong with failing every now and then.
On the plus side, you seem to be saying what you mean now instead of spouting nonsense about the characters. On the minus side, Eliezer still wants to build a humanity-saving AI if he can, but he explicitly said, “I don’t know how to do this yet.” See also.