In hindsight, it totally would have been better if LessWrong had engaged more with AI: A Modern Approach, though for somewhat different reasons. (I’m not saying anything not implicit in the standard textbooks as far as I know, though.)
I think it would’ve been good because it would have pushed things in a more technical direction and helped formalise a bunch of our ideas about planning, search spaces, and reasoning. Personally, I really enjoyed my uni intro-to-AI class based on that textbook, reading things like “The way we form a heuristic in this type of search is by relaxing the constraints of the problem—making the problem easy enough that we can compute an answer quickly” ← these ideas helped me think about my own heuristics.
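For concreteness, here’s a minimal sketch of that relaxation idea (my own illustration, not code from the book): in the 8-puzzle, if you relax the rule “a tile may only slide into the adjacent blank” to “a tile may move one square in any direction,” the relaxed problem is solved exactly by each tile walking straight to its goal square, and summing those walks gives the admissible Manhattan-distance heuristic for the original puzzle.

```python
# Manhattan-distance heuristic for the 8-puzzle, obtained by relaxing
# the constraint that tiles can only slide into the adjacent blank.
# States are 9-tuples read row by row; 0 marks the blank.

GOAL = (1, 2, 3,
        4, 5, 6,
        7, 8, 0)

def manhattan(state, goal=GOAL):
    """Sum of each tile's Manhattan distance to its goal square (blank ignored).

    This is the exact cost of solving the relaxed problem, so it never
    overestimates the cost in the original problem (admissibility).
    """
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

# The goal itself costs 0; swapping the blank one step away costs 1.
print(manhattan(GOAL))                          # → 0
print(manhattan((1, 2, 3, 4, 5, 6, 7, 0, 8)))   # → 1
```

The general pattern the textbook points at: drop a constraint, solve the easier problem exactly, and use that exact answer as an optimistic estimate for the hard problem.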
Relatedly, some of my favourite EY writings are the more technical ones like InEq, QM, Words, and Technical Explanation, because they communicate the core insight so crisply.
Was that what you had in mind?
I was thinking specifically about engagement with the details of the narrative and content of what’s currently called “AI research,” in addition to the abstract idea of general intelligence.