E(U(there exists an agent A maximising U)) ≥ E(U(there exists an agent A satisficing U))
It’s a good idea to define your symbols and terminology in general before (or right after) using them. Presumably U is utility, but what is E? Expectation value? How do you calculate it? What is an agent? How do you calculate the utility of an existential quantifier? If this is all common knowledge, at least give a relevant link. Oh, and it is also a good idea to prove, or at least motivate, any non-trivial formula you present.
Feel free to make your post (which apparently attempts to make an interesting point) more readable for the rest of us (i.e. newbies like me).
Reworded somewhat. E is the expectation value, as is now stated. It does not need to be calculated; we just need to know that a maximiser will always make the decision that maximises the expected value of U, while a satisficer may sometimes make a different decision. Hence the presence of a U-maximiser gives an expected value of U at least as high as the presence of an otherwise equivalent U-satisficer.
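To make that step explicit (a minimal sketch; the finite action set 𝒜 and the satisficing threshold t are my own assumptions, not anything from the original post): the maximiser picks

$$a^* = \arg\max_{a \in \mathcal{A}} E(U \mid a), \qquad a_s \in \{\, a \in \mathcal{A} : E(U \mid a) \ge t \,\},$$

where a_s is whatever action the satisficer settles on. Then, by the definition of the maximum,

$$E(U \mid a^*) = \max_{a \in \mathcal{A}} E(U \mid a) \;\ge\; E(U \mid a_s).$$

Whatever the satisficer does, the maximiser’s choice does at least as well in expectation, which is all the ≥ in the formula claims.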
An agent is “an entity which is capable of Action”: an AI or human being or collection of neurons that can do stuff. It’s a general term here, so I didn’t define it.