I like the sentiment of this post that “noticing other people are wrong does not make you right”, which is very close to a sentiment I’ve been thinking about: that it should be easier to point out uncertainty without providing replacement certainty. I feel this especially about AI risk; I often wanted to point out other people’s lack of knowledge about how to proceed with AI, even though I had no replacement plan. The general idea is that people shouldn’t need to know X as a requirement to claim that another person does not know X.
Please let me know if you have examples along these lines where I seemed dumber than I’m presenting here.
I think you already updated on this, but IIRC, a few years back you seemed unpleasantly complacent, accepting large AI extinction risks as unavoidable rather than looking for ways to avoid them. My two biggest concerns right now are (1) people not understanding AI extinction risk, and (2) people not seeking to avoid AI extinction risk. In particular, there seems to be a trend of shifting to worse alignment strategies that might work on shorter timelines, based on a perceived shortening of available time, rather than seeking strategies that gain the time needed to employ better strategies. It might be that I underestimate the difficulty of an international AGI pause treaty, or that I overestimate the difficulty of making AGI go well, but seeking an international pause seems significantly undervalued.
I don’t know why I believe this, but I believe it anyway
Ha, I like this and think people should be more encouraged to do it, but I might phrase it more like: “The models in my mind seem to indicate this; I don’t have time to fully investigate and explain those models right now.”
[On telling people about AI risk] by the way, I know I’m telling you a lot of crazy stuff; you should take as long as it takes to evaluate all of this on your own;
I get tripped up on this one because I’ve spent the last 10 or so years considering AI and AI alignment, and I kinda want to say, “Look, we don’t really have time for you to understand this if it’s going to take you as long as it took me”… but of course that’s not a very compelling argument, so I’m forced to seek ways of explaining things that get others on the same page faster than it took me. It’s important for us to be both correct and convincing, and that is difficult to do.