If we’re in a sim, it’s being used for acausal trade
Me: Our world is exactly the kind of thing you’d simulate if you were doing acausal trade! It’s just before civilisation develops the ability to lock in deals.
Sceptic: Sure, but there are other reasons people might simulate earth. Maybe it’s for people’s entertainment? Maybe it’s social science, exploring alternate histories?
Me: For sure. But whatever the purpose of the sim is, it will contain info that’s relevant to people who want to do acausal trades. It will have info about who has power post-AGI, what their values are, and whether they want to do acausal trade. If someone ran the sim for entertainment, they’d obviously sell that info to the acausal trade folks.
Sceptic: Won’t the acausal trade folks just run their own sims?
Me: Maybe! But they’ll be keen to buy relevant info from others who run sims. If others run earth sims for entertainment, the acausal trade folks will buy the info and run fewer earth sims themselves.
At this point we can’t be very sure that acausal trade is even a thing, since we don’t have a formal solution to decision theory that we can be confident in (or even one that meets the lower bar of having no clearly serious flaws), and therefore can’t derive acausal trade from it as a solution to some decision problem. (Nor do we have some other way to gain high justified confidence that acausal trade is actually a thing.)
And even if acausal trade is a thing, whoever is running sims of us may not be running a decision theory that allows or recommends acausal trade, we may not have anything to trade that they want, or the trade may fail for some other reason, such as high “transaction costs”.
Do we need to be “very sure”? Seems like the OP doesn’t assert any enormous confidence.
I was reacting to the lack of hedging or explicit uncertainty in the first line “If we’re in a sim, it’s being used for acausal trade”. Possibly Tom didn’t mean to express very high confidence by it (I see that in a more formal occasion he does explicitly state his uncertainty about acausal trade[1]), but I feel an obligation to err on the side of caution here, in case some readers do interpret it as expressing high confidence.
Quote from the link: Of course, unlike the causal case, whether consensus goods get funded depends on whether agents want to do acausal cooperation at all—which depends on their decision theories and their beliefs about their degree of correlation with others.
Ah, cool, yeah, fair enough. I interpreted that as more of a title, and am used to titles usually skipping epistemic qualifiers because you really want them to be short.
Thanks, I’m not personally “very sure” either.
But I wouldn’t rule out someone who’s thought a lot about DT being pretty confident. I don’t think you need to “solve” DT to be very confident that acausal trade is a thing, any more than you need to solve ethics to know that murder is wrong.
I could imagine that some of the acausal trade crowd have thought long enough about the space of decision theories and their implications to conclude that acausal trade is a consequence of many plausible DTs and is very likely happening.
My understanding is that even with CDT you can get sim-based trade (which I’d consider a form of acausal trade), and that on a first pass EDT and UDT both imply that acausal trade makes sense. So we only need some powerful agents to follow one of these decision theories for acausal trade to go ahead.
I guess I can imagine a countercase like “because of threats, very few civs do acausal trade”, though it’s hard to see it go down to zero. I’d be curious if you have other counter-cases in mind.
(In general I’d defer to someone who thinks about this more on how likely acausal trade is to happen overall!)
I could imagine that some of the acausal trade crowd have thought long enough about the space of decision theories and their implications to conclude that acausal trade is a consequence of many plausible DTs and is very likely happening.
I’m not aware of anyone who has claimed this, and I wouldn’t trust such claims anyway, since humans just aren’t that good at fully delineating the space of plausible solutions to a philosophical question. E.g. for the first half-century of decision theory as a field, nobody thought that updatelessness might be worth investigating.
(In general I’d defer to someone who thinks about this more on how likely acausal trade is to happen overall!)
I wouldn’t defer a lot, because humans are also bad at finding flaws in their favorite philosophical ideas. Think of, e.g., theorists and proponents of Objectivism or Communism.
I’d be curious if you have other counter-cases in mind.
There are too many civilizations in the multiverse to simulate all of them. We can only sample some and then decide how to trade based on the statistical properties, but this creates an incentive to free-ride (i.e. to get the benefits of trade from civilizations that did not specifically simulate us, without paying the costs), which may cause an overall breakdown in trade.
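The free-riding incentive described here has the standard public-goods structure. A minimal sketch of why it can unravel trade (all the payoff numbers are my own illustrative assumptions, not from the discussion):

```python
# Toy public-goods model of sim-based acausal trade.
# N civilisations each choose whether to pay the private cost of
# running sims (the thing that makes trade involving them work).
# The benefit of overall trade is shared and scales with how many
# civs contributed; the cost is borne individually.

N = 100          # number of civilisations (assumed)
BENEFIT = 10.0   # benefit to each civ if everyone contributes (assumed)
COST = 1.0       # private cost of running your share of sims (assumed)

def payoff(contributes: bool, n_others_contributing: int) -> float:
    n_total = n_others_contributing + (1 if contributes else 0)
    shared_benefit = BENEFIT * n_total / N
    return shared_benefit - (COST if contributes else 0.0)

# Free-riding dominates: whatever the others do, not contributing
# pays more, because your own contribution adds only BENEFIT/N to
# your share of the benefit but costs you the full COST.
for k in range(N):
    assert payoff(False, k) > payoff(True, k)
print("free-riding is dominant whenever BENEFIT/N < COST")
```

If every civ follows the dominant strategy, nobody runs the sims and trade breaks down, which is the breakdown the comment is pointing at.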
Tiny nit: non-trade sims could happen late in the universe, when the available lightcone isn’t big enough to impact trade much, or after people have already locked in their acausal trade decisions, or something analogous in another universe.
Interesting! Continuing this chain of thought…
But then entertainment sims might oversample entertaining outcomes (e.g. rerun an event 10 times and keep the branch with the most entertaining result). As a result, acausal trade folks’ marginal willingness to pay for info about how those entertaining worlds turn out would go down. And for similar reasons, the stakes of our actions would generally be lower in any given entertainment sim.
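The oversampling point can be made concrete with a quick simulation. The distribution and the “keep the best of 10 reruns” selection rule below are purely illustrative assumptions:

```python
import random

random.seed(0)

# Suppose some quantity of interest in an earth sim (say, how
# dramatic some pivotal event turns out) is drawn afresh each
# time the event is rerun.
def rerun() -> float:
    return random.gauss(0.0, 1.0)

TRIALS = 10_000

# Unbiased sims: one run each.
unbiased = [rerun() for _ in range(TRIALS)]

# Entertainment sims: rerun 10 times, keep the most "entertaining"
# (here: most extreme) branch.
selected = [max(rerun() for _ in range(10)) for _ in range(TRIALS)]

mean_unbiased = sum(unbiased) / TRIALS
mean_selected = sum(selected) / TRIALS

# The kept branches are systematically shifted, so they are worse
# evidence about how a typical (unselected) world actually goes.
assert mean_selected > mean_unbiased + 1.0
print(f"unbiased mean ~ {mean_unbiased:.2f}, selected mean ~ {mean_selected:.2f}")
```

The selected branches are shifted by roughly 1.5 standard deviations here, which is the sense in which info from entertainment-selected worlds is less valuable to a trader trying to learn about the typical world.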
How do you “buy” info from another universe? They can’t respond.
It’s not from another universe, just from your neighbours who run different simulations.
If someone ran the sim for entertainment, they’d obviously sell that info to the acausal trade folks
Weak argument? The set of times that people incidentally produce information relevant to other people is much broader than the set of times they sell it to them.
I think I mostly agree with what this is actually saying, but I’m not sure it’s definitively acausal trade. E.g. it might just be mundane trade, dividing gains by simulating Shapley values or something, as part of a (very elaborate) cooperative bargain.
Forethought is hiring!
You can see our research here.
You can read about what it’s like to work with us here.
We’re currently hiring researchers, and I’d love LW readers to apply.
If you like writing and reading LessWrong, I think you might also enjoy working at Forethought.
I joined Forethought a year ago, and it’s been pretty transformative for my research. I get lots of feedback on my research and great collaboration opportunities.
The median views of our staff are often different from the median views of LW. E.g. we probably have a lower probability on AI takeover (though I’m still >10% on that). That’s part of the reason I’m excited for LW readers to apply. I think a great way to make intellectual progress is via debate, so we want to hire people who strongly disagree with us and have their own perspectives on what’s going on in AI.
We’ve also got a referral bounty of £10,000 for counterfactual recommendations for successful Senior Research Fellow hires, and £5,000 for Research Fellows.
The deadline for applications is Sunday 2nd November. Happy to answer questions!
Accelerating tech progress + slow military procurement → cheap decisive strategic advantage
A common prediction of an intelligence explosion is that tech progress gets faster and faster.
When I speak to people from DC, I’m told that the government and military will be very slow to adopt new tech.
If these two things are both true, there’s a scary implication.
If tech progress speeds up by 30x relative to recent history, then a 3-year procurement delay by the military means they’re deploying tech that’s effectively ~90 years outdated. Even a 1-year delay at that pace means your military is fielding equipment from a completely different technological era. The US military spends ~$1T/year. But with tech that’s 30–90 years more advanced, you could potentially defeat them at 1/100th of the spending: just $10B/year!
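The arithmetic here is simple enough to write down explicitly. The 30x speedup and the delay figures come from the argument above; the 1/100 cost ratio is the post’s own assumption about how tech advantage converts into cost advantage:

```python
# Effective technological lag caused by a fixed procurement delay
# when tech progress runs at `speedup` times the historical rate.
def effective_lag_years(procurement_delay_years: float, speedup: float) -> float:
    # During a delay of d calendar years, the frontier advances
    # the equivalent of d * speedup historical years.
    return procurement_delay_years * speedup

SPEEDUP = 30  # assumed acceleration relative to recent history

assert effective_lag_years(3, SPEEDUP) == 90   # roughly a century behind
assert effective_lag_years(1, SPEEDUP) == 30   # a full technological era

# The budget claim: if advanced tech lets you match a force at
# 1/100th of its spending, a ~$1T/year military could in principle
# be overmatched for ~$10B/year. (The 1/100 factor is an assumption,
# not something derived from the lag calculation.)
US_MILITARY_BUDGET = 1e12
COST_RATIO = 1 / 100
print(f"rogue-actor budget: ${US_MILITARY_BUDGET * COST_RATIO / 1e9:.0f}B/year")
```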
A rogue actor — a private company or a government clique bypassing standard procurement — could spend a tiny fraction of the official military budget on cutting-edge tech and potentially overmatch the entire conventional military.
There’s a massive untapped potential for cheap military dominance that no legitimate actor will exploit, because only the military has the legitimate authority to procure weapons, and it is bureaucratic.
Here’s a visualisation of the basic dynamic:
Possible solutions:
Military procurement needs to get dramatically faster during an intelligence explosion. New AI systems and AI-produced technologies need to be rapidly integrated into official military capabilities. This probably means automating the procurement process itself. This is counterintuitive from some AI safety perspectives — many people’s instinct is to delay military AI deployment.
Delay rapid tech progress until military procurement has become super fast + safe. This might in practice involve delaying the intelligence explosion and/or the industrial explosion as well.
Better monitoring/surveillance to make sure no one is secretly building military tech
Others?
If everyone in our universe doing acausal trade coordinates, we can sell “cosmic real estate” for monopoly prices
Let’s assume that there are many different universes (or Everett branches) that acausally trade.
Some traders won’t value “resources in our civ’s future lightcone” linearly. As a toy example, the leader of a distant alien civilisation might want to get a statue of themselves in as many different other universes as possible.
If many different actors in our universe do acausal trade, and compete with each other to trade with the alien leader, then they’d bid down the price of building that statue. Whereas if they all band together, they could hold out for a much higher price. So it could be in our collective interests to coordinate and “price fix”.
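This is just the contrast between Bertrand-style competition and a cartel, transposed to acausal trade. A toy sketch, where all the numbers are illustrative assumptions:

```python
# Price our civilisation can charge for building the alien leader's
# statue, under competition vs. coordination ("price fixing").

COST_OF_STATUE = 1.0        # resources it takes us to build it (assumed)
ALIEN_VALUATION = 100.0     # what the statue is worth to the buyer (assumed)

def competitive_price(n_sellers: int) -> float:
    # Bertrand-style competition: with 2+ interchangeable sellers,
    # each has an incentive to undercut the others, driving the
    # price down to cost.
    return ALIEN_VALUATION if n_sellers == 1 else COST_OF_STATUE

def cartel_price() -> float:
    # A coordinated civilisation acts like a single seller and can
    # hold out for (almost) the buyer's full valuation.
    return ALIEN_VALUATION

surplus_competitive = competitive_price(10) - COST_OF_STATUE   # zero surplus
surplus_cartel = cartel_price() - COST_OF_STATUE               # nearly all of it
print(f"gain from coordinating: {surplus_cartel - surplus_competitive}")
```

The gap between the two surpluses is what a civilisation-wide price-fixing agreement would capture, which is why it looks like a public good worth coordinating on.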
This is an example of a civilisation-wide public good that could be important long into the future.