I pointed out that the concluding exhortation misses the mark.
It absolutely does. I sacrificed some precision for clarity so that I could end with a ringing exhortation. When I have a moment I’ll probably footnote this.
Honestly, part of me is still a little confused about what I’m supposed to do at the ends of essays other than stop talking when I’ve said all the stuff I have to say.
ETA: On further reflection, the exhortation is almost right. The target you want to optimize for is "outcome in which money is worth more," but "outcome I'd really hate" is a cheaper target to compute: it's emotionally salient and can be processed quickly, probably in parallel, while still being a decent pointer to the true target. You can then use a deliberative, serial process to pick the outcomes you should actually bet on.
The target you want to optimize for is "outcome in which money is worth more," but "outcome I'd really hate" is a cheaper target to compute: it's emotionally salient and can be processed quickly, probably in parallel, while still being a decent pointer to the true target. You can then use a deliberative, serial process to pick the outcomes you should actually bet on.
Exactly!