(Addendum to my other comment)
Here is why I believe that reading the Sequences might not be worth the effort:
1) According to your survey, 38.5% of respondents have read at least 75% of the Sequences, yet only 16.5% think that unfriendly AI is the most fearsome existential risk.
2) The following (smart) people have read the Sequences, and more, but do not agree about risks from AI:
Robin Hanson
Katja Grace (who has been a visiting fellow)
John Baez (who interviews Eliezer Yudkowsky)
Holden Karnofsky
Ben Goertzel
So what? I’m not even sure that Eliezer himself considers uFAI the most likely source of extinction. It’s just that Friendly AI would help save us from most of the other possible sources of extinction too (not just from uFAI), and from several other sources of suffering as well (not just extinction), so figuring it out kills multiple birds with one stone.
As a point of note, I myself didn’t place uFAI as the most likely existential risk in that survey. That doesn’t mean I share your attitude.
I hope I didn’t claim that the Sequences, or any argument, were 100% effective in changing the mind of every single person who read them.
Also, Ben Goertzel has read all the Sequences? That makes that recent conversation with Luke kind of sad.
No. But in light of an expected utility calculation, why would I read the Sequences?
Assuming you continue to write posts authoritatively about subjects related to said sequences, including criticisms of their contents, having read them may reduce the frequency with which you humiliate yourself.
They contain many insights unrelated to AI (looking at the sequences wiki page, it seems that most of the AI-ish material is concentrated in the second half). And many people had fun reading them. I think reading them would be a better use of time than the generic improvement of your math education that you speak of elsewhere (I don’t think it makes sense to learn math as an instrumental goal without a specific application in mind; unless you simply like math, in which case knock yourself out).
From a theoretical standpoint, you should never expect that observing something will shift your beliefs in some particular direction (and, guess what, there’s a post about that). This doesn’t hold perfectly for humans: we can be convinced of things, and we can expect to be convinced even if we don’t want to be. Even so, the fact that the Sequences fail to convince many people shouldn’t be an argument against reading them. At least now you can be sure that they’re safe to read and won’t brainwash you.
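The claim that you should never expect an observation to shift your beliefs in a particular direction is conservation of expected evidence: averaged over the possible outcomes, your expected posterior equals your prior. A minimal numerical sketch, with made-up illustrative probabilities:

```python
# Conservation of expected evidence: before seeing evidence E, the
# probability-weighted average of the possible posteriors P(H|E) and
# P(H|not-E) equals the prior P(H). Numbers below are arbitrary.

p_h = 0.3            # prior P(H)
p_e_given_h = 0.8    # likelihood P(E|H)
p_e_given_not_h = 0.4

# Total probability of seeing the evidence
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior in each branch, via Bayes' rule
post_if_e = p_e_given_h * p_h / p_e
post_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

# Expected posterior, weighted by how likely each branch is
expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
print(expected_posterior)  # equals the prior, 0.3
```

Each individual outcome does move your belief (here, up to about 0.46 or down to 0.125), but the movements cancel in expectation; so a belief can survive evidence only by being balanced against it in advance.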