New Technical is a bit too technical for me, so at the book’s recommendation I read An Untrollable Mathematician Illustrated instead and got a cool lesson on the work done to bring together probability theory and logical induction. I’m in this weird spot where I know more math than the vast majority of people but vastly less math than e.g. the researchers at MIRI. And so when I read posts about MIRI’s research and the mathematics of AI alignment, I’m either bored or hopelessly lost within two paragraphs.
I expect your response to be common, and therefore have begun to wonder how the heck Technical Explanation got into the book. Did the people who upvoted it really read it? Did they get anything out of it?
I’m curious whether Radical Probabilism did more for you. I think of it as the better attempt at the same thing, i.e., communicating the insights of logical induction for broader Bayesian rationality.