If we’re in a sim, it’s being used for acausal trade
Me: Our world is exactly the kind of thing you’d simulate if you were doing acausal trade! It’s just before civilisation develops the ability to lock in deals.
Sceptic: Sure, but there are other reasons people might simulate earth. Maybe it’s for people’s entertainment? Maybe it’s social science, exploring alternate histories?
Me: For sure. But whatever the purpose of the sim, it will contain info that’s relevant to people who want to do acausal trades. It will have info about who holds power post-AGI, what their values are, and whether they want to do acausal trade. If someone ran the sim for entertainment, they’d obviously sell that info to the acausal trade folks.
Sceptic: Won’t the acausal trade folks just run their own sims?
Me: Maybe! But they’ll be keen to buy relevant info from others who run sims. If others run earth sims for entertainment, the acausal trade folks will buy the info and run fewer earth sims themselves.
At this point we can’t be very sure that acausal trade is even a thing: we don’t have a formal solution to decision theory that we can be confident in (or that even meets the lower bar of having no clearly serious flaws), so we can’t derive acausal trade from it as the solution to some decision problem. (Nor do we have some other way to gain high justified confidence that acausal trade is actually a thing.)
And then even if acausal trade is a thing, whoever is running sims of us may not be running a decision theory that allows or recommends acausal trade, or we may not have anything to trade that they want, or the trade may fail for some other reason, like high “transaction costs”.
Do we need to be “very sure”? Seems like the OP doesn’t assert any enormous confidence.
I was reacting to the lack of hedging or explicit uncertainty in the first line “If we’re in a sim, it’s being used for acausal trade”. Possibly Tom didn’t mean to express very high confidence by it (I see that in a more formal occasion he does explicitly state his uncertainty about acausal trade[1]), but I feel an obligation to err on the side of caution here, in case some readers do interpret it as expressing high confidence.
Quote from the link: Of course, unlike the causal case, whether consensus goods get funded depends on whether agents want to do acausal cooperation at all—which depends on their decision theories and their beliefs about their degree of correlation with others.
Ah, cool, yeah, fair enough. I interpreted that as more of a title, and am used to titles usually skipping epistemic qualifiers because you really want them to be short.
Tiny nit: non-trade sims could happen late in the universe, when the available lightcone isn’t big enough to impact trade much; or people could have already locked in their acausal trade decisions; or something analogous in another universe.
How do you “buy” info from another universe? They can’t respond.
It’s not from another universe, just from your neighbours who run different simulations.
Weak argument? The set of occasions on which people incidentally produce information relevant to other people is much broader than the set of occasions on which they sell it to them.
I think I mostly agree with what this is actually saying, but I’m not sure it’s definitively acausal trade. E.g. it might be just mundane trade, dividing gains by simulating Shapley values or something as part of a (very elaborate) cooperative bargain.
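For concreteness, here’s a minimal sketch of what “dividing gains by simulating Shapley values” means: each party’s share is its average marginal contribution over all orders in which the coalition could form. The two-player toy game and its payoffs are invented purely for illustration.

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Toy game: A and B together create 10 units of surplus; alone, each creates 2.
def v(coalition):
    if coalition == frozenset({"A", "B"}):
        return 10.0
    return 2.0 * len(coalition)

print(shapley_values(["A", "B"], v))  # → {'A': 5.0, 'B': 5.0}
```

Since the toy game is symmetric, the surplus splits evenly; with asymmetric contributions the shares would differ accordingly.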
Interesting! Continuing this chain of thought…
But then, entertainment sims might oversample entertaining outcomes (e.g. rerun an event 10 times and use the branch with the most entertaining result). And then, as a result, acausal trade folks’ marginal willingness to pay for info about how those entertaining worlds turn out would go down. And for similar reasons, the stakes of our actions would then generally be lower in any given entertainment sim.
Forethought is hiring!
You can see our research here.
You can read about what it’s like to work with us here.
We’re currently hiring researchers, and I’d love LW readers to apply.
If you like writing and reading LessWrong, I think you might also enjoy working at Forethought.
I joined Forethought a year ago, and it’s been pretty transformative for my research. I get lots of feedback on my research and great collaboration opportunities.
The median views of our staff are often different from the median views of LW. E.g. we probably put a lower probability on AI takeover (though I’m still >10% on that). That’s part of the reason I’m excited for LW readers to apply. I think a great way to make intellectual progress is via debate. So we want to hire people who strongly disagree with us and have their own perspectives on what’s going on in AI.
We’ve also got a referral bounty of £10,000 for counterfactual recommendations for successful Senior Research Fellow hires, and £5,000 for Research Fellows.
The deadline for applications is Sunday 2nd November. Happy to answer questions!
Accelerating tech progress + slow military procurement → cheap decisive strategic advantage
A common prediction of an intelligence explosion is that tech progress gets faster and faster.
When I speak to people from DC, I’m told that the government and military will be very slow to adopt new tech.
If these two things are both true, there’s a scary implication.
If tech progress speeds up by 30x relative to recent history, then a 3-year procurement delay by the military means they’re deploying tech that’s effectively 100 years outdated. Even a 1-year delay at that pace means your military is fielding equipment from a completely different technological era. The US military spends ~$1T/year. But with tech that’s 30–100 years more advanced, you could potentially defeat them at 1/100th of the spending — just $10B!
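The arithmetic behind these figures can be made explicit. Note that all the numbers here are the post’s illustrative assumptions (a 30x speedup, a 3-year delay, a 1/100 cost advantage), not empirical estimates:

```python
# Illustrative assumptions from the argument above, not estimates.
speedup = 30          # tech progress speedup relative to recent history
delay_years = 3       # military procurement delay, in calendar years
effective_lag = speedup * delay_years
print(effective_lag)  # 90 calendar-equivalent years of tech gap (~a century)

us_budget = 1e12      # ~$1T/year of US military spending
cost_ratio = 1 / 100  # assumed cost advantage from a 30-100 year tech lead
print(us_budget * cost_ratio)  # $10B needed to potentially overmatch it
```

A 1-year delay at the same speedup gives a 30-year gap, which is where the post’s “30–100 years” range comes from.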
A rogue actor — a private company or a government clique bypassing standard procurement — could spend a tiny fraction of the official military budget on cutting-edge tech and potentially overmatch the entire conventional military.
There’s a massive untapped potential for cheap military dominance that no legitimate actor will exploit, because only the military has the legitimate authority to procure weapons, and it is bureaucratic.
Here’s a visualisation of the basic dynamic:
Possible solutions:
Military procurement needs to get dramatically faster during an intelligence explosion. New AI systems and AI-produced technologies need to be rapidly integrated into official military capabilities. This probably means automating the procurement process itself. This is counterintuitive from some AI safety perspectives — many people’s instinct is to delay military AI deployment.
Delay rapid tech progress until military procurement has become super fast and safe. This might in practice involve delaying the intelligence explosion and/or the industrial explosion as well.
Better monitoring/surveillance to make sure no one is secretly building military tech.
Others?