[Question] Utility functions without a maximum

An elementary question that has probably been discussed for 300 years, but I don’t know the right keyword to google for it.

How, theoretically, do you deal (in decision theory/AI alignment) with a “noncompact” utility function? E.g. suppose the set of actions is parameterized by t in (0, 1], with U(t) = t for t < 1 and U(1) = 0. Which t should the agent choose?
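To see the difficulty concretely: the supremum of U is 1, but no action attains it, since every t < 1 is beaten by a larger t' < 1, and t = 1 itself is worst. A minimal sketch (Python, purely illustrative):

```python
def U(t):
    """The example utility: U(t) = t for t < 1, U(1) = 0."""
    return t if t < 1 else 0.0

def improve(t):
    """Given any 0 < t < 1, return a strictly better action (halfway to 1)."""
    return (t + 1) / 2

t = 0.5
for _ in range(5):
    print(t, U(t))
    t = improve(t)
# Utilities climb toward the supremum 1 but never reach it, and U(1) = 0,
# so "argmax U" is simply undefined on this action set.
```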

E.g. consider: the agent gains utility f(t) from expending a resource at time t, where f is a (sufficiently fast-growing) increasing function. When does the agent expend the resource? Expending at any time t is dominated by expending a little later, yet never expending at all gains nothing.
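The “(sufficiently fast-growing)” qualifier is what blocks the usual fix of exponential discounting. A sketch of that failure, with an assumed f(t) = e^(2t) and discount factor gamma = 0.5 (both my choices, purely illustrative): the discounted payoff gamma^t * f(t) = (gamma * e^2)^t still grows without bound whenever gamma * e^2 > 1, so the discounted problem again has no maximizer.

```python
import math

def f(t):
    """Hypothetical fast-growing payoff for expending the resource at time t."""
    return math.exp(2 * t)

gamma = 0.5  # discount factor; gamma * e**2 ~ 3.69 > 1, so discounting loses

# Discounted value of expending at integer time t: gamma**t * f(t) = (gamma * e**2)**t.
# "Wait one more step" is always strictly better than "expend now":
for t in range(6):
    now = gamma**t * f(t)
    later = gamma**(t + 1) * f(t + 1)
    print(t, round(now, 2), later > now)  # the comparison is always True
```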

I guess the obvious answer is “such a utility function cannot exist, because the agent obviously does something, and that demonstrates what the agent’s true utility function is”, but it seems like it would be difficult to hard-code compactness into a utility function (say, by closing off the action set or capping the utility) in a way that doesn’t cause the agent to be stupid.
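One standard theoretical move here (my framing, not something the question proposes) is to weaken “maximize” to “satisfice within epsilon of the supremum,” which is achievable even when the supremum is not attained. A hypothetical sketch:

```python
def epsilon_optimal_action(U, sup_U, epsilon, candidates):
    """Return any candidate action whose utility is within epsilon of the supremum.

    Assumes sup_U is known and some candidate achieves U >= sup_U - epsilon.
    Purely illustrative; all names here are hypothetical.
    """
    for a in candidates:
        if U(a) >= sup_U - epsilon:
            return a
    raise ValueError("no candidate is epsilon-optimal; refine the candidate set")

# For the example above: sup U = 1 is never attained, but epsilon-optimality is easy.
pick = epsilon_optimal_action(lambda t: t if t < 1 else 0.0, 1.0, 0.01,
                              [1 - 2**-k for k in range(1, 12)])
print(pick)  # 0.9921875, within 0.01 of the supremum
```

This sidesteps the nonexistence of a maximum, at the cost of making the agent’s choice underdetermined among all epsilon-optimal actions.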
