I’m not downvoting or upvoting, but I will say, I hope you’re not taking this exercise too seriously...
Are we really going to analyze one person’s fiction (even if rationalist, it’s still fiction) in an attempt to gain insight into this one person’s attempt to model an entire society and its market predictions – and all of this in order to better judge the probability of certain futures under a number of counterfactual assumptions? Could be fun, but I wouldn’t give its results much credence.
Don’t forget Yudkowsky’s own advice about not generalizing from fictional evidence and being wary of anchoring. If I had to guess, some of his use of fiction is just an attempt to provide alternative framings and anchors to those thrust on us by popular media (more mainstream TV shows, movies, etc.). That doesn’t mean we should hang on his every word, though.
Yeah, I think the level of seriousness is basically the same as if someone asked Eliezer “what’s a plausible world where humanity solves alignment?” to which the reply would be something like “none unless my assumptions about alignment are wrong, but here’s an implausible world where alignment is solved despite my assumptions being right!”
The implausible world is sketched out in great detail, but it loses a lot of usefulness points by being implausible. The useful kernel remaining is something like “with infinite coordination capacity we could probably solve alignment,” plus a bit because Eliezer fiction is substantially better for your epistemics than other fiction. Maybe there’s an argument for taking it even less seriously? That said, I’ve definitely updated down on the usefulness of this given the comments here.