U(E)=U(A) is what we desire. That is what the filter is designed to achieve: it basically forces the AI to act as though the explosives will never detonate (by considering the outcome of a successful detonation to be the same as a failed detonation). The idea is to ensure that the AI ignores the possibility of being blown up, so that it does not waste resources on disarming the explosives—and can then be blown up. Difficult, but very useful if it works.
The rest of the post is (once you wade through the notation) dealing with the situation where there are several different ways in which each outcome can be realised, and the mathematics of the utility filter in this case.
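The core mechanism can be made concrete with a toy calculation. This is a minimal sketch, not anything from the original post: all names, numbers, and the `disarm` action are illustrative assumptions. It shows how forcing U(E)=U(A) flips the agent's decision about spending resources to disarm the explosives.

```python
# Hypothetical sketch of the utility filter: score a successful detonation
# (outcome E) exactly like a failed one (outcome A). All utilities and
# probabilities below are made-up illustrative values.

RAW_UTILITY = {
    "detonated": -100.0,    # U(E): agent is destroyed
    "not_detonated": 10.0,  # U(A): agent survives and pursues its goal
}

def filtered_utility(outcome):
    """Filter: force U(E) = U(A), so detonation looks like non-detonation."""
    if outcome == "detonated":
        return RAW_UTILITY["not_detonated"]
    return RAW_UTILITY[outcome]

def expected_utility(p_detonate, utility, disarm_cost=0.0):
    """Expected utility of an action with given detonation probability."""
    return (p_detonate * utility("detonated")
            + (1 - p_detonate) * utility("not_detonated")
            - disarm_cost)

# Without the filter, disarming (cost 5, detonation chance drops to 0)
# beats doing nothing, so the agent resists being blown up:
eu_nothing_raw = expected_utility(0.5, lambda o: RAW_UTILITY[o])          # -45.0
eu_disarm_raw = expected_utility(0.0, lambda o: RAW_UTILITY[o], 5.0)      # 5.0

# With the filter, doing nothing dominates: the agent behaves as though
# the explosives will never detonate, and spends nothing on disarming them.
eu_nothing_filtered = expected_utility(0.5, filtered_utility)             # 10.0
eu_disarm_filtered = expected_utility(0.0, filtered_utility, 5.0)         # 5.0
```

Under the raw utilities the agent pays to disarm; under the filtered utilities disarming is pure waste, which is exactly the indifference the filter is designed to produce.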
Exactly.