It’s hard to articulate exactly why, but I feel like “utility-maximizing agent(s)” is not the right frame to think about AI in. You can fit a utility function to any sequence of ‘actions’ an ‘agent’ makes, so the abstraction “utility function” has no real power to predict the ‘actions’ of an ‘agent’. There’s also the fundamental human bias of ascribing agency to non-agentic systems (the weather, printers).
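To make the fitting point concrete, here's a minimal sketch (my own construction, not anything from a specific formal result): given *any* observed sequence of (state, action) pairs, we can define a degenerate utility function under which every observed action is utility-maximizing. The function names and the toy state/action space are all hypothetical, just for illustration.

```python
def fit_utility(trajectory):
    """Return a utility function u(state, action) that rates each
    observed action strictly higher than any alternative action."""
    observed = dict(trajectory)  # state -> action taken in that state

    def u(state, action):
        # Degenerate utility: 1 for whatever was actually done, 0 otherwise.
        return 1.0 if observed.get(state) == action else 0.0

    return u

actions = ["left", "right", "wait"]
# An arbitrary, even "irrational"-looking behaviour sequence:
trajectory = [(0, "left"), (1, "left"), (2, "wait"), (3, "right")]

u = fit_utility(trajectory)
# Every observed action is the argmax of the fitted utility:
recovered = [max(actions, key=lambda a: u(s, a)) for s, _ in trajectory]
assert recovered == [a for _, a in trajectory]
```

Since this construction works for any behaviour at all, "it maximizes a utility function" rules nothing out, which is exactly why the abstraction predicts nothing on its own.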