I would have labelled Camp A as the control camp (control of the winner over everyone else) and Camp B as the mutual cooperation camp (since ending the race is the fruit of cooperation between nations). If we decide to keep racing for superintelligence, what does the finish line look like, in your view?
I think @rife is talking either about mutual cooperation between safety advocates and capabilities researchers, or mutual cooperation between humans and AIs.
Cooperation between humans and AIs rather than an attempt to control AIs. I think the race is going to happen regardless of who drops out of it. If those who are in the lead eventually land on mutual alignment, then we stand a chance. We’re not going to outsmart the AIs, nor will we stay in control of them, nor should we.