I agree with most of this. I think it is plausible that the value of a scenario is, in some sense, upper-bounded by its description length, so that we would need on the order of a googolplex of bits to describe a googolplex of value.
We can separately ask whether this solves the problem. One may want a theory which solves the problem regardless of utility function; or, aiming lower, one may be satisfied to find a class of utility functions which seems to capture human intuition well enough.
Upper-bounding utility by description complexity doesn't actually capture the intuition, since a simple universe could give rise to many complex minds: a short program can generate structures far more complex, and far more numerous, than its own description.
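To make that point concrete, here is a small illustrative sketch (my own example, not from the original comment): a program whose description length is fixed and tiny, but which generates arbitrarily many distinct complex histories by running Rule 110 (a simple cellular automaton known to support complex behavior) from every possible n-cell seed. The generator's complexity stays constant while the number of distinct histories grows as 2**n, analogous to a simply-described universe containing many complex minds.

```python
def rule110_step(cells):
    """One step of the Rule 110 cellular automaton (periodic boundaries)."""
    # Rule 110 lookup: (left, center, right) -> next state of center cell.
    table = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
             (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    n = len(cells)
    return [table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

def histories(n, steps):
    """Yield the full evolution of every n-cell seed: 2**n distinct histories."""
    for seed in range(2 ** n):
        cells = [(seed >> i) & 1 for i in range(n)]
        history = [tuple(cells)]
        for _ in range(steps):
            cells = rule110_step(cells)
            history.append(tuple(cells))
        yield tuple(history)

# The program above is a few hundred characters, yet for n = 8 it produces
# 256 histories, all distinct (each starts from a different seed row).
print(len(set(histories(8, steps=10))))  # prints 256
```

Of course, the complexity-bound view can respond that the *set* of all histories is still simple to describe; the disagreement is over whether value should track the description length of the whole, or the complexity present in each of its many parts.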