The major difference between the models seems to be that Eliezer’s model assumes the orthogonality thesis and high scalability for AGI, while the Foresight model keeps AGIs bounded much closer to human level in both behaviour and capabilities.
Yes, it makes sense to think about institutions and protocols for coordinating with entities that are basically just smart humans, except electronic. If that’s all we get, then such models may be of some benefit. However, it makes much less sense to present those models as grounds for optimism and for enthusiastic work on creating AGI, and that is what they’re doing.