“Incentivized to build intuitive self-models” does not necessarily imply “does in fact build intuitive self-models”. As I wrote in §1.4.1, just because a learning algorithm is incentivized to capture some pattern in its input data doesn’t mean it will actually succeed in doing so.
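To make that concrete with a toy illustration (my own, purely for intuition, assuming numpy; not meant as a serious model of anything): a linear classifier trained on XOR is “incentivized” to capture the pattern, in the sense that its loss would be strictly lower if it did, but no setting of its parameters can realize XOR, so gradient descent just plateaus:

```python
# Toy illustration: a model can be "incentivized" to capture a pattern
# (the loss would be lower if it did) while still failing to capture it.
# A linear classifier on XOR is the classic case.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)  # XOR labels

rng = np.random.default_rng(0)
w = rng.normal(size=2)
b = 0.0

for _ in range(5000):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid predictions
    grad = p - y                        # gradient of cross-entropy w.r.t. logits
    w -= 0.1 * (X.T @ grad)
    b -= 0.1 * grad.sum()

print((p > 0.5).astype(int))  # some linear split; never [0, 1, 1, 0]
```

The incentive is right there in the loss landscape; the capacity to act on it isn’t.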
Right, of course. So would this imply that organisms with very simple brains / simple roles in their environment (for example, ones that don’t need to end up with a flexible understanding of the consequences of their actions) would likewise have only a very weak incentive?
And if an intuitive self-model helps with things like flexible planning, then even though it’s a creation of the ‘blank-slate’ cortex, surely some organisms would have a genome that sets up certain hyperparameters encouraging it, no? It would seem strange for something so seriously adaptive to be purely an ‘epiphenomenon’ (in the same way that language is facilitated by hyperparameters encoded in the genome). But it’s also fine if you just don’t have an opinion on this haha. (Also: wouldn’t some animals lack an incentive to create self-models, if creating one wouldn’t seriously increase performance in any relevant domain? Like a dog trying to create an in-depth model of the patterns that appear on computer monitors, maybe.)
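(To gesture at what I mean by “hyperparameters”, here’s a toy analogy I made up, which has nothing to do with real genomes: reusing the XOR sketch above, whether a pattern gets captured at all can hinge on a single architectural hyperparameter, here just the hidden-layer width. A “genome” that sets that knob isn’t building the solution in directly, but it does make it findable by the learning algorithm.)

```python
# Toy analogy (my own, not from the post): the "genomic hyperparameter"
# here is just the hidden-layer width. Width 0 reduces to the linear
# model above and fails on XOR; a modest width makes XOR learnable.
import numpy as np

def train_xor(hidden=4, steps=20000, lr=0.5, seed=0):
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    rng = np.random.default_rng(seed)
    W1 = rng.normal(size=(2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(size=(hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)               # hidden layer
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output
        d2 = p - y                             # cross-entropy gradient
        d1 = (d2 @ W2.T) * (1 - h ** 2)        # backprop through tanh
        W2 -= lr * (h.T @ d2); b2 -= lr * d2.sum(0)
        W1 -= lr * (X.T @ d1); b1 -= lr * d1.sum(0)
    return (p > 0.5).astype(int).ravel()

print(train_xor(hidden=0))  # no hidden units: predictions are constant, can't do XOR
print(train_xor(hidden=4))  # typically learns [0 1 1 0]
```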
It does seem like flexible behaviour in some general sense is perfectly possible without awareness (as I’m sure you know), but I understand that awareness would surely help a whole lot.
You might have no opinion on this at all, but would you have any vague guess as to why you can only verbally report items in awareness? (Because even if awareness is a model of serial processing, and verbal report requires that kind of global projection / high state of attention, I’ve still seen studies showing that stimuli can be globally accessible / globally projected in the brain and yet still not consciously accessible; presumably, in your model, that’s due to a lack of modelling of that global access.)