That’s plausible, IDK. But are you saying that PROSPECTIVELY the PREDICTABLE-ish effects were bad? Who said “Sure you could tie together a whole bunch of existing epistemological threads, and do a bunch of new thinking, and explain AI danger very clearly and thoroughly, and yeah, you could attract a huge amount of brainpower to try to think clearly about how to derisk that, but then they’ll just all start trying to make AGI. And here’s the reasons I can actually know this.”? There might have been people starting to say this by 2015 or 2018, IDK. But in 2010? 2006?
I think it’s not an impossible call. The fiasco with Roko’s Basilisk (2010) seems like a warning that could have been heeded. It turns out that “freaking out” about something being dangerous and scary makes it salient and exciting, which in turn causes people to fixate on it in ways that are obviously counterproductive. It becomes a mark of pride to do the dangerous thing and come away unscathed (as with the Demon core), even though you warned them about this from the beginning, and in very clear terms.
And even if there was no one able to see this (it’s not like I saw it), it remains a strategic error — reality doesn’t grade on a curve.
Yes, it would be a strategic error in a sense, but it wouldn’t be a strong argument against “Yudkowsky is the best strategic thinker on AGI X-derisking”, which I was given to understand was the topic of this thread. For that specific question, which seemed to be the topic of Wei Dai’s comment, it is graded on a curve. (I don’t actually feel that interested in that question though.)
The question doesn’t make sense. It’s not possible to judge conclusively whether something is good or bad ahead of time, only after the fact.
Because real-world actions and outcomes are what count, not what is claimed verbally or in writing.