I still haven’t read most of the sequences and don’t intend to read HPMOR
That’s fine; that’s what projectlawful is for. It’s meant to be the fun thing you can do instead of watching TV shows and scrolling social media. I like reading it after waking up and before going to bed.
It’s ideal in a lot of ways, because it’s explicitly designed to have you learn rationality through habit and repetition, without the deliberate repetitive effort (e.g. taking notes) that’s otherwise necessary to actually get good at turning rationality into extreme competence at life.
The EY self-insert character, Keltham, is much more humble and is genuinely interested in the world and people around him (many of whom are “smarter” than him or vigorously intend to surpass him). He’s not preachy; he’s an economist, and most of the rationality lessons are just him describing how things are in an alternate timeline (dath ilan), not insisting that they ought to be that way.
I definitely agree that it’s a good idea to find ways to use EY’s writings to get ahead of the curve and find new opportunities; it would save everyone a lot of time and labor to just implement the original works themselves instead of the usual rewriting of them in your own words and taking credit to advance your status. What you’ve said about summarization makes sense, but I’ve tried it and it’s a lot harder than it looks; stripping out the lists of examples and parables makes the content harder to digest properly. This is an extreme case: it basically turned the best 25 sequences into notes (a great alternative to rereading all 25, since you can go through them every morning, but not ideal for a first read).
Maybe such a contest could also require entrants to describe generally valuable, effective strategies for condensing EY’s writings?
Is rationalism really necessary for understanding MIRI-type views on AI alignment? I personally find rationalism off-putting, and I don’t think it’s very persuasive to say “you have to accept a complex philosophical system and rewire your brain to process evidence and arguments differently to understand one little thing.” If that’s the deal, I don’t think you’ll find many takers outside of those already convinced.
In what way are you processing evidence differently from “rationalism”?
I’m probably not processing evidence any differently from “rationalism”. But starting an argument with “your entire way of thinking is wrong” gets interpreted by the audience as “you’re stupid” and things go downhill from there.
There are definitely such people. The question is whether people who don’t want to learn to process evidence correctly (because the suggestion that they’ve been doing it wrong until now offends them) were ever going to contribute to AI alignment in the first place.
Fair point. My position is simply that, when trying to make the case for alignment, we should focus on object-level arguments. It’s not a good use of our time to try to reteach philosophy when the object-level arguments are the crux.
That’s generally true… unless both parties process the object-level arguments differently, because they have different rules for updating on evidence.
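To make that concrete, here’s a toy sketch with made-up numbers (purely illustrative, not anything either side has actually written down): two people can share a prior and see the exact same evidence, yet land in very different places if they apply different likelihoods to that evidence.

```python
# Toy Bayes-rule sketch with illustrative numbers: same prior, same evidence,
# different rules for how strongly the evidence bears on the hypothesis.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Reader A treats the argument as strong evidence; reader B treats it as weak.
print(posterior(0.5, p_e_given_h=0.9, p_e_given_not_h=0.1))  # ~0.90
print(posterior(0.5, p_e_given_h=0.6, p_e_given_not_h=0.5))  # ~0.55
```

Same object-level argument, very different updates; that’s exactly the situation where arguing only at the object level stops working.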
EY originally blamed the failure to agree with his obviously correct arguments about AI on poor thinking skills, and set out to correct that. But other explanations are possible.
Yeah, that’s not a very persuasive story to skeptics.