Sergei Smirnov
Research Engineer, PhD Student | Technical AI Safety | https://www.linkedin.com/in/seriozh1/
“Rational subagents will never bid more than the reward they actually expect.”
A subagent might bid higher to mislead the others and benefit more in the long run.
“we can resolve conflicts between beliefs and observations either by updating our beliefs, or by taking actions which make the beliefs come true”
Or by ignoring the situation altogether. I believe the agent should know that not every observation is worth considering.
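To make the three resolutions concrete, here is a minimal toy sketch. It is not the active-inference free-energy formalism; the function names, the reliability threshold, and the decision rules are illustrative assumptions only.

```python
def update_belief(prior: float, lik_if_true: float, lik_if_false: float) -> float:
    """Bayesian update of P(state=True) given an observation's likelihoods."""
    num = prior * lik_if_true
    return num / (num + (1.0 - prior) * lik_if_false)

def resolve_conflict(belief: float, obs_supports_belief: bool, obs_reliability: float,
                     can_act: bool, reliability_threshold: float = 0.6) -> str:
    """Pick one of the three ways to resolve a belief/observation conflict."""
    if obs_supports_belief:
        return "no conflict"
    if obs_reliability < reliability_threshold:
        # Option 3 (the point above): a noisy or irrelevant observation is ignored.
        return "ignore observation"
    if can_act and belief > 0.9:
        # Option 2: the belief is held strongly and the world is controllable,
        # so act to make the belief come true instead of revising it.
        return "act to make the belief true"
    # Option 1: revise the belief toward the contrary evidence.
    posterior = update_belief(belief, 1.0 - obs_reliability, obs_reliability)
    return f"update belief to {posterior:.2f}"

if __name__ == "__main__":
    # Strong belief, controllable world: the agent acts rather than updates.
    print(resolve_conflict(0.95, obs_supports_belief=False, obs_reliability=0.8, can_act=True))
    # Weak belief, reliable contrary evidence: the agent updates.
    print(resolve_conflict(0.55, obs_supports_belief=False, obs_reliability=0.8, can_act=False))
    # Unreliable observation: the agent ignores it.
    print(resolve_conflict(0.70, obs_supports_belief=False, obs_reliability=0.3, can_act=False))
```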
“I think that what active inference is missing is the ability to model strategic interactions between different goals”
The strategic-interaction gap you highlight echoes David Kirsh’s pragmatic vs. epistemic distinction (I’m collaborating with him). Current scale-free frameworks capture pragmatic, world-altering moves, yet overlook epistemic actions: internal simulations and information-seeking that update the agent’s belief state without changing the environment. That is exactly where those goal negotiations seem to unfold.
“Finally, one considerable operational hindrance was that internally we didn’t have a proper tracker for stuff to do. We assumed things would be straightforward...” Could you please elaborate, giving examples of what was not straightforward and what required ad hoc meetings?