I didn’t know that about Bayesian inference-ish updating baking in an Occam-ish prior. Does it need to be complexity-penalizing, or would any consistent prior-choosing rule work? I assume the former from the phrasing.
Why is that? “does not much constrain the end results” could just mean that unless we assume the agent is Occam-ish, we can’t tell from its posteriors whether it did Bayesian inference or something else. But I don’t see why that couldn’t also be true of some non-Occam-ish prior-picking rule, as long as we knew what that rule was.
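(To pin the question down for myself, a minimal Python sketch, with hypotheses, complexities, and likelihoods all made up: the same Bayesian update run once with an Occam-ish, complexity-penalizing prior and once with a plain uniform prior. Both are consistent prior-choosing rules; the posteriors differ, but neither one, seen on its own, tells you which rule produced it unless you already know the rule.)

```python
# Toy illustration only: made-up hypotheses, each with a "complexity"
# (description length in bits, say) and a likelihood on some fixed
# imaginary observation.
hypotheses = {
    "h_simple":  {"complexity": 2, "likelihood": 0.30},
    "h_medium":  {"complexity": 5, "likelihood": 0.50},
    "h_complex": {"complexity": 9, "likelihood": 0.70},
}

def occam_prior(h):
    # Complexity-penalizing ("Occam-ish") prior: P(h) proportional to 2^-complexity(h).
    return 2.0 ** -hypotheses[h]["complexity"]

def uniform_prior(h):
    # A non-Occam-ish but perfectly consistent prior-choosing rule.
    return 1.0 / len(hypotheses)

def posterior(prior):
    # Standard Bayes: P(h|data) proportional to P(h) * P(data|h), then normalize.
    unnormalized = {h: prior(h) * v["likelihood"] for h, v in hypotheses.items()}
    z = sum(unnormalized.values())
    return {h: p / z for h, p in unnormalized.items()}

print("Occam-ish prior ->", posterior(occam_prior))
print("Uniform prior   ->", posterior(uniform_prior))
# Both runs are legitimate Bayesian updates; what differs is the
# prior-choosing rule, not the update machinery.
```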
I think this definition includes agents that care only about their sensory inputs, since sensory inputs are a subset of the states of the world.
This makes me think that the definition of “economic agent” that I googled (“an agent who is part of the economy”) isn’t what was meant, since the one here seems to be making a claim primarily about efficiency rather than about impacting markets. Something more like homo economicus?
“Naturalistic agent” seems to be primarily a claim about the situation the agent finds itself in, rather than a claim about that agent’s models (e.g., a Cartesian dualist that was in fact embedded in a universe made of atoms, and was itself made of atoms, would still be a “naturalistic agent”, I think).
The last point reminds me of Dawkins-style extended phenotypes; not sure how analogous/comparable that concept is. I guess it makes me want to go back and figure out whether we defined what “an agent” is. So, does a beehive count as “an agent”? (I believe that, conditioned on it being an agent at all, it would be a naturalistic agent.)
...does Arbital have search functionality right now? Maybe not :-/
Examples of ‘strong, general optimization pressures’? Maybe the sorts of things in that table from Superintelligence. Is ‘optimization pressure’ something like a selective filter, where “strong” means the trait was strongly selected for? And maybe the reason to say ‘optimization’ is to imply that there was a trait that was selected for, strongly, in the same direction (or towards the same narrow target, more like) over many “generations”. Mm, or that all the many different elements of the agent were built towards that trait, with nothing else being a strong competitor. And then “general” is presumably doing something like the work it does in “general intelligence”, i.e., not narrow. Ah, a different meaning would be that the agent has been subject to strong pressures towards being a ‘general optimizer’; that seems less strongly implied by the grammar, but doesn’t create any obvious interpretive differences.
Oh, or “general” could mean “along many/all axes”. So, optimization pressure that is strong, and along many axes. That fails to specify which axes are relevant, but that doesn’t seem super problematic at this moment.
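(Trying to cash that reading out concretely, a toy Python sketch, with every number and the fitness function invented: “strong” shows up as a tight survival filter applied over many rounds, and “general” as selection acting on many trait axes at once, all pushed towards the same target.)

```python
# Toy sketch of one reading of "strong, general optimization pressure":
# a selective filter applied repeatedly, where "strong" means only a small
# fraction survives each round, and "general" means selection acts along
# many axes at once. All parameters here are made up.
import random

NUM_AXES = 10            # "general": many traits under selection at once
POP_SIZE = 200
GENERATIONS = 50
SURVIVAL_FRACTION = 0.1  # "strong": only the top 10% pass the filter each round

def random_agent():
    return [random.gauss(0.0, 1.0) for _ in range(NUM_AXES)]

def mutate(agent):
    return [trait + random.gauss(0.0, 0.1) for trait in agent]

def fitness(agent):
    # Selection pushes every axis toward the same target (here, large values).
    return sum(agent)

population = [random_agent() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: int(POP_SIZE * SURVIVAL_FRACTION)]
    # Refill the population with mutated copies of the survivors.
    population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]

print("Mean fitness after selection:", sum(map(fitness, population)) / POP_SIZE)
```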
It’s not obvious to me that filtering for agents powerful enough to be relevant will leave mainly agents who’ve been subjected to strong, general optimization pressures. For example, the Limited Genie described on the Advanced Agent page maybe wasn’t?
For self-optimization, I assume this is broadly because of the convergent instrumental values claim?