[Question] Least-problematic Resource for learning RL?

Well, Sutton & Barto is the standard choice, but:

Superficial, not comprehensive, and somewhat outdated as of the 2018 second edition; a good chunk is focused on older techniques I never or rarely read about again, like SARSA and eligibility traces (exponentially decaying credit assignment). The closest I remember them getting to DRL was their discussion of the challenges faced by function approximators.
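(For concreteness, here's roughly the kind of classic method I mean: a minimal tabular SARSA(λ) sketch. The toy chain environment and all the hyperparameters are made up for illustration, not taken from the book.)

```python
# Minimal sketch of tabular SARSA with eligibility traces on a toy 1-D chain:
# moving right eventually reaches a rewarding terminal state.
import numpy as np

n_states, n_actions = 10, 2          # toy chain; actions: 0 = left, 1 = right
alpha, gamma, lam, eps = 0.1, 0.99, 0.9, 0.1
Q = np.zeros((n_states, n_actions))

def step(s, a):
    """Toy dynamics: right moves toward the goal at the last state."""
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    done = s_next == n_states - 1
    return s_next, reward, done

def epsilon_greedy(s):
    if np.random.rand() < eps:
        return np.random.randint(n_actions)
    return int(Q[s].argmax())

for episode in range(500):
    E = np.zeros_like(Q)             # eligibility traces, decayed each step
    s, a = 0, epsilon_greedy(0)
    done = False
    while not done:
        s_next, r, done = step(s, a)
        a_next = epsilon_greedy(s_next)
        # On-policy target: uses the action actually taken next (SARSA)
        td_error = r + gamma * Q[s_next, a_next] * (not done) - Q[s, a]
        E[s, a] += 1.0
        Q += alpha * td_error * E    # credit spread over recently visited pairs
        E *= gamma * lam             # exponential decay of the traces
        s, a = s_next, a_next
```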

It also has some issues with, e.g., claiming that the reward is the optimization target. Other RL textbooks seem similarly problematic: very outdated, with awkward language and conceptual confusions.

OpenAI's Spinning Up in Deep RL seems better on the not-being-outdated front, but it feels quite high-level, focusing mostly on practicality and implementation, while I'm also looking for a more theoretical treatment of RL.

I'm starting to think there probably isn't a single resource that fits all these requirements, so I'm considering a mix of (1) lightly reading the textbooks for classic RL theory and (2) covering modern surveys to catch up on recent DRL work.

Are there any resources for learning RL that don't have (any of) the problems I've mentioned above? I'd like to know if I'm missing any.
