I liked Robby’s introduction to the book overall, but I find it somewhat ironic that right after the prologue, where Eliezer mentions that one of his biggest mistakes in writing the Sequences was focusing on abstract philosophical problems removed from people’s daily lives, the introduction begins with:
Imagine reaching into an urn that contains seventy white balls and thirty red ones, and plucking out ten mystery balls.
The first rewrite in less abstract terms that comes to mind (though not necessarily the best one) would be something like “Imagine that you’re standing by the entrance of a university whose students are seven-tenths female and three-tenths male, watching ten students go in...”; with the biased example being “On the other hand, suppose that you happen to be standing by the entrance of the physics department, which is mostly male even though the university as a whole is mostly female.”
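For what it’s worth, the point of the urn example survives either framing, and it’s easy to check numerically. Here’s a quick simulation sketch (my own illustration, not anything from the book) of drawing ten mystery balls from an urn with seventy white balls and thirty red ones:

```python
import random
from collections import Counter

# The urn from the quoted example: seventy white balls, thirty red.
urn = ["white"] * 70 + ["red"] * 30

random.seed(0)  # fixed seed so the sketch is reproducible

# Draw ten balls without replacement, many times over, and tally
# how many red balls show up in each draw.
red_counts = Counter(
    random.sample(urn, 10).count("red") for _ in range(100_000)
)

# The most common draw contains about three red balls, mirroring
# the urn's 30% red proportion -- the unbiased-sample intuition
# the introduction is reaching for.
most_common = red_counts.most_common(1)[0][0]
print(most_common)
```

The same simulation works just as well with “students entering a university” substituted for “balls drawn from an urn”, which is part of why the concrete reframing costs nothing.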
Some unnecessary technical jargon also caught my eye in the first actual post: e.g. “Rational agents make decisions that maximize the probabilistic expectation of a coherent utility function” could have been rewritten more accessibly as, e.g., “rational agents make decisions that are the most likely to produce the kinds of outcomes they’d like to see”.
I could spend some time making notes of these kinds of things and offering suggested rewrites for making the printed book more broadly accessible—would MIRI be interested in that, or would they prefer to keep the content as is?
Part of the idea behind the introduction is to replace an early series of posts: “Statistical Bias”, “Inductive Bias”, and “Priors as Mathematical Objects”. These are alluded to several times later in the Sequences, and the posts “An Especially Elegant Evolutionary Psychology Project”, “Where Recursive Justification Hits Bottom”, and “No Universally Compelling Arguments” all call back to the urn example. That said, I do think a more interesting example (whether or not it’s more ‘ordinary’ and everyday) would be a better note to start the book on.
Do feel free to send stylistic or substantive change ideas to errata@intelligence.org, not just spelling errors.
This came to mind for me as well. This, from Burdensome Details, popped out at me: “Moreover, they would need to add absurdities—where the absurdity is the log probability, so you can add it—rather than averaging them.” All this does for me is pattern-match to a Wikipedia article I once read about the concept of entropy in information theory; I don’t really know what it means in any precise sense or why it might be true. And the essay seems to stand on its own without that part. I’ve learned to ignore my fear of not understanding things unless I’m failing to understand pretty much everything I’m reading, but I think a lot of people would get scared that they didn’t know enough to read the book and simply stop.
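For anyone else who bounced off that aside: reading “absurdity” as the negative log of a probability (the surprisal, in information-theory terms), the point is that probabilities of independent claims multiply when you conjoin them, so their absurdities add. A minimal sketch, with made-up probabilities purely for illustration:

```python
import math

def absurdity(p):
    """Negative log probability: small for plausible claims, large for implausible ones."""
    return -math.log(p)

p_a = 0.5   # a fairly plausible claim
p_b = 0.01  # an implausible added detail

# The probability of the conjunction multiplies...
p_both = p_a * p_b

# ...so the absurdity of the conjunction is the SUM of the parts,
# because log(x * y) = log(x) + log(y).
assert math.isclose(absurdity(p_both), absurdity(p_a) + absurdity(p_b))

# Averaging the absurdities instead would understate how unlikely
# the conjunction is -- which is the mistake the quoted passage warns about.
print(absurdity(p_both))                       # about 5.30
print((absurdity(p_a) + absurdity(p_b)) / 2)   # about 2.65
```

None of which changes the point that the essay stands fine without the aside; it just shows the aside is a one-footnote-sized fact, not a prerequisite.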
Come to think of it, we could collect proposed rewrites and deletions on some wiki page: this seems suitable for a communal effort. The “deletions” wouldn’t actually need to be literal deletions; they could just be moved into footnotes. E.g., in the Burdensome Details article, a footnote saying something like “technically, you can measure probabilities by logarithms and...”
I like the idea of turning a lot of these jargony asides, especially early in the book, into footnotes. We’ll need to make heavier use of footnotes anyway, in order to explicitly direct people to other parts of the series in places where there will no longer be a clickable link. (Though we won’t do this for most clickable links, just the especially interesting or important ones.)
You’re welcome to use a wiki page to list suggested changes, or a Google Doc; or just send a bunch of e-mails to errata@intelligence.org with ideas.