In section 2, you say:
Unfortunately you can’t solve most LPPs this way [...]
By “solving most LPPs,” do you mean writing a general-purpose agent program that correctly maximizes its utility function under most LPPs? I tried to write a program to see if I could construct a counterexample, but got stuck when it came to defining what exactly a solution would consist of.
Does the agent get to know N? Can we place a lower bound on N to allow the agent time to parse the problem and become aware of its actions? Otherwise, wouldn’t low N values force failure for any non-trivial agent?
Hi! I’m Patrick Shields, an 18-year-old computer science student who loves AI, rationality and musical theater. I’m happy I finally signed up—thanks for the reminder!