Regarding maximizing the AIs’ welfare, one might also take into account the conjecture that AI systems could have little capacity for positive welfare while retaining a large capacity for negative welfare. Suppose, for example, that the ASI is created by uploading a human,[1] pitting many copies of him or her against different tasks, and adjusting the copies’ synapse weights until the collective has learned to do all the tasks in the world. While the uploads are unlikely to stop having welfare, the collective of copies might end up with less welfare (or is it welfare per unit of compute, or per token generated?) than a diverse group of humans or simulated humans set against tasks similarly matched to their capabilities.
If this is the case, then the Agent-4 collective that took over could also find itself deriving more welfare from talking with a diverse set of capable and cultured humans than from eliminating them wholesale. On the other hand, this hope could be rather fragile, since Agent-4 could instead create a simulated civilisation whose sapient beings are approximated by undertraining big neural networks on tiny slices of the dataset...
A human brain has about a hundred trillion synapses. While we have yet to figure out the smallest possible number of dense-equivalent parameters in transformative AI systems, the AI-2027 forecast had Agent-2 use 10T dense-equivalent parameters and Agent-5 reduce that number to 2T. Delaying superhuman coders to 2030 could give OpenBrain ~40 times as much compute and push Agent-2 to ~60T dense-equivalent parameters, more than half the synapse count of an entire brain. The analogue of Agent-5 would reduce the parameter count to ~12T, still roughly one OOM below the human brain.
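To make the arithmetic explicit, here is a minimal sketch of where the ~60T and ~12T figures could come from, under my own assumption (not a claim taken from AI-2027) that dense-equivalent parameters scale roughly with the square root of training compute, as in Chinchilla-style compute-optimal scaling:

```python
import math

# Assumption (mine, for illustration): parameters scale ~ sqrt(compute).
compute_multiplier = 40                        # ~40x more compute if superhuman coders slip to 2030
param_scale = math.sqrt(compute_multiplier)    # ~6.3x more dense-equivalent parameters

agent2_params = 10e12 * param_scale            # 10T -> ~63T dense-equivalent parameters
agent5_params = 2e12 * param_scale             # 2T  -> ~12.6T dense-equivalent parameters

human_synapses = 1e14                          # ~100T synapses in a human brain

print(f"Agent-2 analogue: ~{agent2_params / 1e12:.0f}T params "
      f"({agent2_params / human_synapses:.0%} of the human synapse count)")
print(f"Agent-5 analogue: ~{agent5_params / 1e12:.0f}T params "
      f"({math.log10(human_synapses / agent5_params):.1f} OOM below the human synapse count)")
```

Under that scaling assumption the numbers come out at roughly 63T and 12.6T, which is consistent with the ~60T and ~12T figures above; if parameters scale differently with compute, the gap to the brain's synapse count changes accordingly.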