I think it may be better to begin this line of attack by building a system that can simulate worlds far simpler than our own and efficiently answer certain counterfactuals about them, with the aim of finding generally robust ways to steer a multipolar world toward the best collective outcomes.
A basic first goal would be to test new systems of governance, such as Futarchy, and determine whether they actually work, for that is currently unknown.
We have never steered any world. Let’s start by steering simpler worlds than our own, and see what that looks like.
I agree that we should start by trying this with far simpler worlds than our own, and with futarchy-style decision-making schemes: forecasters produce extremely stylized QURI-style models that map from action-space to outcome-space, while a broader group of stakeholders defines mappings from outcome-space to each stakeholder's utility.
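The futarchy-style scheme described above can be sketched in code, as a minimal toy: forecasters are stylized models from actions to (stochastic) outcomes, stakeholders are functions from outcomes to utility, and the decision rule picks the action with the highest Monte Carlo estimate of aggregate utility. Every name, number, and outcome variable here is an illustrative assumption, not an existing system or API.

```python
import random

# Forecasters: stylized models mapping an action to a sampled outcome.
# (Hypothetical toy world with two outcome dimensions: growth and inequality.)
def forecaster_a(action):
    base = {"invest": 1.0, "tax": 0.5, "wait": 0.3}[action]
    return {"growth": base + random.gauss(0, 0.1),
            "inequality": 0.5 if action == "invest" else 0.2}

def forecaster_b(action):
    base = {"invest": 0.8, "tax": 0.5, "wait": 0.4}[action]
    return {"growth": base + random.gauss(0, 0.2),
            "inequality": 0.4 if action == "invest" else 0.25}

FORECASTERS = [forecaster_a, forecaster_b]

# Stakeholders: mappings from outcome-space to each stakeholder's utility.
STAKEHOLDERS = [
    lambda o: o["growth"],                        # cares only about growth
    lambda o: o["growth"] - 2 * o["inequality"],  # penalizes inequality
]

def expected_welfare(action, n_samples=1000):
    """Monte Carlo estimate of total stakeholder utility for an action."""
    total = 0.0
    for _ in range(n_samples):
        model = random.choice(FORECASTERS)  # pool forecasts uniformly
        outcome = model(action)
        total += sum(u(outcome) for u in STAKEHOLDERS)
    return total / n_samples

def choose_action(actions):
    """Pick the action with the highest estimated aggregate utility."""
    return max(actions, key=expected_welfare)

random.seed(0)
best = choose_action(["invest", "tax", "wait"])
print(best)
```

Even this toy version surfaces the questions the comment is gesturing at: how forecasts are pooled, how stakeholder utilities are aggregated, and whether the chosen action is robust to changes in either.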
I feel that this is far too ambitious.