In general, I felt like the beginning was a bit weak, with the informal-technical discussion the weakest part, and then it got substantially stronger from there.
I worry that I particularly enjoy the kind of writing they do, but we’ve already tapped the market of folks like me. Like, I worked at MIRI and now moderate LessWrong because I was convinced by the Sequences. So that’s a pretty strong selection filter for liking their writing. Of course we should caveat my experience quite a bit given that.
But, for what it’s worth, I thought Part 2 was great. Stories make things seem real, and my reader-model was relatively able to grant the plot beats as possible. I thought they did a good job of explaining that while the AI had many options open to it, and they, the authors, might well not know whether any given approach would work out, it didn’t follow that all of the AI’s plans would fail.
The other thing I really liked: they would occasionally explain some science to expand on their point (nuclear physics is the example they expounded on at length, but IIRC they mentioned a bunch of other bits of science in passing). I’m not sure why I liked this so much. Perhaps it was because it was grounding, or reminded me not to throw my mind away, or made me trust them a little more. Again, I’m really not sure how well this generalises to people for whom their previous writing hasn’t worked.
I worry that I particularly enjoy the kind of writing they do, but we’ve already tapped the market of folks like me
Yup, hence my not being excited to see the usual rhetoric being rehearsed, instead of something novel.
The other thing I really liked: they would occasionally explain some science to expand on their point (nuclear physics is the example they expounded on at length, but IIRC they mentioned a bunch of other bits of science in passing)
Yup. Chapter 10 is my favorite.