The fact that this was completely ignored is a little disappointing. This is a very important question that would help put upper bounds on value drift, but it seems that answering it limits the imagination when it comes to ASI. Has there ever been an answer to it?
I have a feeling that larger brains face greater coordination problems between their subcomponents, especially once you hit information-transfer limits. This would put some hard limits on how far you can scale intelligence, but I may be wrong.
A Fermi estimate of the upper bounds of intelligence might rule out some of the problem classes that alignment arguments tend to include.
You seem to be replying to your previous shortform post, but these do not naturally show up below each other. If you want to thread them, it is probably better to reply to yourself.
That is very weird and probably a bug. This isn’t supposed to be on my shortform 😅
This appears to be someone else’s shortform, which was edited so that the shortform container doesn’t look like a shortform container anymore.
No. This is how the ShortForm is supposed to work. The comments on the ShortForm “post” are like tweets.