Why isn’t the following decision theory optimal?

I’ve recently read the decision theory FAQ, as well as Eliezer’s TDT paper. When reading the TDT paper, a simple decision procedure occurred to me which, as far as I can tell, gets the correct answer to every tricky decision problem I’ve seen. As discussed in the FAQ above, evidential decision theory gets the chewing gum problem wrong, causal decision theory gets Newcomb’s problem wrong, and TDT gets counterfactual mugging wrong.

In the TDT paper, Eliezer postulates an agent named Gloria (page 29), defined as an agent who maximizes on decision-determined problems. He describes how a CDT agent named Reena would want to transform herself into Gloria. Eliezer writes:

By Gloria’s nature, she always already has the decision-type causal agents wish they had, without need of precommitment.

Eliezer later goes on to develop TDT, which is supposed to construct Gloria as a byproduct:

Gloria, as we have defined her, is defined only over completely decision-determined problems of which she has full knowledge. However, the agenda of this manuscript is to introduce a formal, general decision theory which reduces to Gloria as a special case.

Why can’t we instead construct Gloria directly, using the idea of the agent that CDT agents wish they were? Obviously we can’t just postulate a decision algorithm that we don’t know how to execute, note that a CDT agent would wish it had that decision algorithm, and pretend we had solved the problem. We need to be able to describe the ideal decision algorithm to a level of detail that we could theoretically program into an AI.

Consider this decision algorithm, which I’ll temporarily call Nameless Decision Theory (NDT) until I get feedback about whether it deserves a name: you should always make the decision that a CDT agent would have wished he had precommitted to, if he had previously known he’d be in his current situation and had the opportunity to precommit to a decision.

In effect, you are making a general precommitment to behave as if you had made all the specific precommitments that would ever be advantageous to you.
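To make this a bit more concrete, here is a rough sketch of the kind of procedure I have in mind, using Newcomb’s problem as the test case. This is purely illustrative: it assumes a perfect predictor, hard-codes the standard payoffs, and the function names are mine, not anything from the TDT paper.

```python
# Illustrative sketch only: NDT applied to Newcomb's problem.
# Assumes a perfect predictor, so the opaque box's contents are determined
# by whichever policy the agent would have precommitted to.

ACTIONS = ["one-box", "two-box"]

def payoff_if_precommitted(action):
    """Payoff a CDT agent would expect if it had precommitted to `action`
    before the predictor made its prediction."""
    opaque_box = 1_000_000 if action == "one-box" else 0
    transparent_box = 1_000
    return opaque_box if action == "one-box" else opaque_box + transparent_box

def ndt_decision(actions, payoff_fn):
    """Pick the action the agent would have wished to precommit to."""
    return max(actions, key=payoff_fn)

print(ndt_decision(ACTIONS, payoff_if_precommitted))  # -> "one-box"
```

The point is just that the wished-for precommitment is evaluated from the perspective of a CDT agent choosing a policy before the predictor examines him, which is why the procedure one-boxes.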

NDT is so simple, and Eliezer comes so close to stating it in his discussion of Gloria, that I assume there is some flaw with it that I’m not seeing. Perhaps NDT does not count as a “real”/“well defined” decision procedure, or can’t be formalized for some reason? Even so, it does seem like it’d be possible to program an AI to behave in this way.

Can someone give an example of a decision problem for which this decision procedure fails? Or one for which there are multiple possible precommitments that you would have wished you’d made, and it’s not clear which is best?

EDIT: I now think this definition of NDT better captures what I was trying to express: you should always make the decision that a CDT agent would have wished he had precommitted to, if he had previously considered the possibility of his current situation and had the opportunity to costlessly precommit to a decision.
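For example, under this revised definition I’d expect NDT to pay up in the counterfactual mugging, since a CDT agent who had considered the possibility before Omega’s coin flip would have wished to costlessly precommit to paying. A toy version of that calculation (using the standard $100/$10,000 payoffs; again, just illustrative):

```python
# Illustrative sketch only: the revised NDT on counterfactual mugging.
# Omega flips a fair coin; on heads it pays $10,000 iff you are the kind
# of agent who would hand over $100 when asked on tails.

def ev_before_coin_flip(pay_on_tails):
    """Expected value, evaluated before the coin flip, of precommitting
    to pay (or not pay) the $100 when asked."""
    heads = 0.5 * (10_000 if pay_on_tails else 0)
    tails = 0.5 * (-100 if pay_on_tails else 0)
    return heads + tails

best_policy = max([True, False], key=ev_before_coin_flip)
print(best_policy)  # -> True: precommit to paying, so NDT pays when asked
```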