At that point, you have saturated your available parallelism and Amdahl’s law rules. [...] Then you just have multiple brains.
I think the point (or in any case my takeaway) is that this might be the Giant Cheesecake Fallacy. Initially, there isn't enough hardware for running a single em on the whole cluster to become wasteful, so that is what happens instead of running more ems slower, since serial work is more valuable. By the time you run into the limits of how much one em can be parallelized, the parallelized ems will long since have invented a process for making their brains bigger, making use of more nodes and preserving the regime where only a few ems run on most of the hardware. This is more about the personal identity of the ems than about computing architecture: a way of "making brains bigger" may well look like "multiple brains" from the outside, but they are the brains of a single em, not multiple ems or multiple instances of an em.
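The saturation the parent comment points to can be made concrete with Amdahl's law: if only a fraction p of an em's cognition parallelizes, the speedup is capped at 1/(1-p) no matter how many nodes are thrown at it, which is why the pressure shifts toward bigger brains rather than more copies. A minimal sketch (the value p = 0.95 is an illustrative assumption, not a claim about real brains):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup from running the parallelizable fraction p of a
    workload on n processors; the serial fraction (1 - p) is the cap."""
    return 1.0 / ((1.0 - p) + p / n)

# Assumed parallel fraction of 95%: speedup saturates near 1 / 0.05 = 20x,
# however many nodes you add.
for n in [1, 10, 100, 1_000, 10_000]:
    print(f"{n:>6} nodes -> {amdahl_speedup(0.95, n):6.2f}x")
```

Past the saturation point, extra nodes buy almost nothing for a fixed-size em, so spending them on enlarging the em's brain (raising p, in effect) is the only way to keep them useful for one mind.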